00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2381 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3646 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.114 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.123 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.150 Fetching changes from the remote Git repository 00:00:00.173 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.200 Using shallow fetch with depth 1 00:00:00.200 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.200 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.264 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.264 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.438 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.448 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.458 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:07.458 > git config core.sparsecheckout # timeout=10 00:00:07.467 > git read-tree -mu HEAD # timeout=10 00:00:07.482 > git checkout -f 
db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:07.504 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:07.504 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.592 [Pipeline] Start of Pipeline 00:00:07.604 [Pipeline] library 00:00:07.605 Loading library shm_lib@master 00:00:07.606 Library shm_lib@master is cached. Copying from home. 00:00:07.627 [Pipeline] node 00:00:07.646 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.647 [Pipeline] { 00:00:07.653 [Pipeline] catchError 00:00:07.654 [Pipeline] { 00:00:07.665 [Pipeline] wrap 00:00:07.674 [Pipeline] { 00:00:07.684 [Pipeline] stage 00:00:07.686 [Pipeline] { (Prologue) 00:00:07.916 [Pipeline] sh 00:00:08.877 + logger -p user.info -t JENKINS-CI 00:00:08.909 [Pipeline] echo 00:00:08.910 Node: GP11 00:00:08.915 [Pipeline] sh 00:00:09.274 [Pipeline] setCustomBuildProperty 00:00:09.284 [Pipeline] echo 00:00:09.286 Cleanup processes 00:00:09.291 [Pipeline] sh 00:00:09.595 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.595 4742 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.613 [Pipeline] sh 00:00:09.912 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:09.912 ++ awk '{print $1}' 00:00:09.912 ++ grep -v 'sudo pgrep' 00:00:09.912 + sudo kill -9 00:00:09.912 + true 00:00:09.930 [Pipeline] cleanWs 00:00:09.939 [WS-CLEANUP] Deleting project workspace... 00:00:09.939 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.950 [WS-CLEANUP] done 00:00:09.955 [Pipeline] setCustomBuildProperty 00:00:09.971 [Pipeline] sh 00:00:10.262 + sudo git config --global --replace-all safe.directory '*' 00:00:10.355 [Pipeline] httpRequest 00:00:12.274 [Pipeline] echo 00:00:12.276 Sorcerer 10.211.164.20 is alive 00:00:12.284 [Pipeline] retry 00:00:12.286 [Pipeline] { 00:00:12.300 [Pipeline] httpRequest 00:00:12.305 HttpMethod: GET 00:00:12.305 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.306 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.320 Response Code: HTTP/1.1 200 OK 00:00:12.320 Success: Status code 200 is in the accepted range: 200,404 00:00:12.321 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.493 [Pipeline] } 00:00:19.514 [Pipeline] // retry 00:00:19.522 [Pipeline] sh 00:00:19.823 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:19.843 [Pipeline] httpRequest 00:00:20.253 [Pipeline] echo 00:00:20.256 Sorcerer 10.211.164.20 is alive 00:00:20.267 [Pipeline] retry 00:00:20.269 [Pipeline] { 00:00:20.285 [Pipeline] httpRequest 00:00:20.291 HttpMethod: GET 00:00:20.291 URL: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:20.292 Sending request to url: http://10.211.164.20/packages/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:00:20.307 Response Code: HTTP/1.1 200 OK 00:00:20.307 Success: Status code 200 is in the accepted range: 200,404 00:00:20.308 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:02:11.581 [Pipeline] } 00:02:11.599 [Pipeline] // retry 00:02:11.606 [Pipeline] sh 00:02:11.903 + tar --no-same-owner -xf spdk_dcc2ca8f30ea717d7f66cc9c92d44faa802d2c19.tar.gz 00:02:15.215 [Pipeline] sh 00:02:15.509 + git -C spdk log 
--oneline -n5
00:02:15.509 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option
00:02:15.509 73f18e890 lib/reduce: fix the magic number of empty mapping detection.
00:02:15.509 029355612 bdev_ut: add manual examine bdev unit test case
00:02:15.509 fc96810c2 bdev: remove bdev from examine allow list on unregister
00:02:15.509 a0c128549 bdev/nvme: Make bdev nvme get and set opts APIs public
00:02:15.530 [Pipeline] withCredentials
00:02:15.544 > git --version # timeout=10
00:02:15.558 > git --version # 'git version 2.39.2'
00:02:15.591 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:15.594 [Pipeline] {
00:02:15.606 [Pipeline] retry
00:02:15.609 [Pipeline] {
00:02:15.627 [Pipeline] sh
00:02:16.219 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:02:16.499 [Pipeline] }
00:02:16.517 [Pipeline] // retry
00:02:16.522 [Pipeline] }
00:02:16.539 [Pipeline] // withCredentials
00:02:16.551 [Pipeline] httpRequest
00:02:16.901 [Pipeline] echo
00:02:16.903 Sorcerer 10.211.164.20 is alive
00:02:16.913 [Pipeline] retry
00:02:16.915 [Pipeline] {
00:02:16.929 [Pipeline] httpRequest
00:02:16.935 HttpMethod: GET
00:02:16.936 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:16.937 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:16.941 Response Code: HTTP/1.1 200 OK
00:02:16.942 Success: Status code 200 is in the accepted range: 200,404
00:02:16.942 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:22.232 [Pipeline] }
00:02:22.249 [Pipeline] // retry
00:02:22.257 [Pipeline] sh
00:02:22.555 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:24.488 [Pipeline] sh
00:02:24.781 + git -C dpdk log --oneline -n5
00:02:24.782 caf0f5d395 version: 22.11.4
00:02:24.782 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:24.782 dc9c799c7d vhost: fix missing spinlock unlock
00:02:24.782 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:24.782 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:24.794 [Pipeline] }
00:02:24.807 [Pipeline] // stage
00:02:24.839 [Pipeline] stage
00:02:24.844 [Pipeline] { (Prepare)
00:02:24.865 [Pipeline] writeFile
00:02:24.881 [Pipeline] sh
00:02:25.174 + logger -p user.info -t JENKINS-CI
00:02:25.188 [Pipeline] sh
00:02:25.481 + logger -p user.info -t JENKINS-CI
00:02:25.494 [Pipeline] sh
00:02:25.785 + cat autorun-spdk.conf
00:02:25.785 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:25.785 SPDK_TEST_NVMF=1
00:02:25.785 SPDK_TEST_NVME_CLI=1
00:02:25.785 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:25.785 SPDK_TEST_NVMF_NICS=e810
00:02:25.785 SPDK_TEST_VFIOUSER=1
00:02:25.785 SPDK_RUN_UBSAN=1
00:02:25.785 NET_TYPE=phy
00:02:25.785 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:25.785 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:25.794 RUN_NIGHTLY=1
00:02:25.798 [Pipeline] readFile
00:02:25.834 [Pipeline] withEnv
00:02:25.836 [Pipeline] {
00:02:25.846 [Pipeline] sh
00:02:26.134 + set -ex
00:02:26.134 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:26.134 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:26.134 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:26.134 ++ SPDK_TEST_NVMF=1
00:02:26.134 ++ SPDK_TEST_NVME_CLI=1
00:02:26.134 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:26.134 ++ SPDK_TEST_NVMF_NICS=e810
00:02:26.134 ++ SPDK_TEST_VFIOUSER=1
00:02:26.134 ++ SPDK_RUN_UBSAN=1
00:02:26.134 ++ NET_TYPE=phy
00:02:26.134 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:26.134 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:26.134 ++ RUN_NIGHTLY=1
00:02:26.134 + case $SPDK_TEST_NVMF_NICS in
00:02:26.134 + DRIVERS=ice
00:02:26.134 + [[ tcp == \r\d\m\a ]]
00:02:26.134 + [[ -n ice ]]
00:02:26.134 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
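The driver-reset step just issued unloads RDMA modules (mlx4_ib, mlx5_ib, irdma, i40iw, iw_cxgb4) that could conflict with the ice driver wanted for the e810 NICs; the rmmod ERROR lines that follow are expected for modules that are not loaded and are swallowed so the stage survives under `set -e`. A minimal sketch of that tolerate-then-load pattern, with `sudo` stubbed out as a tracer so it can run unprivileged (the `reset_nic_drivers` name and the stub are illustrative, not part of the pipeline):

```shell
# Sketch of the driver-reset pattern seen in the log: "|| true" tolerates
# "module not loaded" errors from rmmod, while the final modprobe of the
# wanted driver is the only command that must succeed.
reset_nic_drivers() {
    wanted=$1
    sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 2>/dev/null || true
    sudo modprobe "$wanted"
}

# Stub sudo with a tracer so this sketch runs without root privileges.
sudo() { echo "sudo $*"; }

reset_nic_drivers ice
# prints:
#   sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
#   sudo modprobe ice
```

The `|| true` is what lets the pipeline treat a clean machine (nothing to unload) and a dirty one identically; only a failure to load the wanted driver aborts the stage.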
00:02:26.134 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:02:29.447 rmmod: ERROR: Module irdma is not currently loaded 00:02:29.447 rmmod: ERROR: Module i40iw is not currently loaded 00:02:29.447 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:02:29.447 + true 00:02:29.447 + for D in $DRIVERS 00:02:29.447 + sudo modprobe ice 00:02:29.447 + exit 0 00:02:29.458 [Pipeline] } 00:02:29.474 [Pipeline] // withEnv 00:02:29.479 [Pipeline] } 00:02:29.494 [Pipeline] // stage 00:02:29.502 [Pipeline] catchError 00:02:29.504 [Pipeline] { 00:02:29.516 [Pipeline] timeout 00:02:29.516 Timeout set to expire in 1 hr 0 min 00:02:29.518 [Pipeline] { 00:02:29.529 [Pipeline] stage 00:02:29.531 [Pipeline] { (Tests) 00:02:29.543 [Pipeline] sh 00:02:29.835 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.835 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.835 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:02:29.835 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:02:29.835 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:29.836 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:02:29.836 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:02:29.836 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:29.836 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:29.836 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:29.836 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:29.836 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:29.836 + source /etc/os-release
00:02:29.836 ++ NAME='Fedora Linux'
00:02:29.836 ++ VERSION='39 (Cloud Edition)'
00:02:29.836 ++ ID=fedora
00:02:29.836 ++ VERSION_ID=39
00:02:29.836 ++ VERSION_CODENAME=
00:02:29.836 ++ PLATFORM_ID=platform:f39
00:02:29.836 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:29.836 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:29.836 ++ LOGO=fedora-logo-icon
00:02:29.836 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:29.836 ++ HOME_URL=https://fedoraproject.org/
00:02:29.836 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:29.836 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:29.836 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:29.836 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:29.836 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:29.836 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:29.836 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:29.836 ++ SUPPORT_END=2024-11-12
00:02:29.836 ++ VARIANT='Cloud Edition'
00:02:29.836 ++ VARIANT_ID=cloud
00:02:29.836 + uname -a
00:02:29.836 Linux spdk-gp-11 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:29.836 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:30.776 Hugepages
00:02:30.776 node hugesize free / total
00:02:30.776 node0 1048576kB 0 / 0
00:02:30.776 node0 2048kB 0 / 0
00:02:31.035 node1 1048576kB 0 / 0
00:02:31.035 node1 2048kB 0 / 0
00:02:31.035
00:02:31.035 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:31.035 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:02:31.035 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:02:31.035 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:02:31.035 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:02:31.035 + rm -f /tmp/spdk-ld-path
00:02:31.035 + source autorun-spdk.conf
00:02:31.035 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:31.035 ++ SPDK_TEST_NVMF=1
00:02:31.035 ++ SPDK_TEST_NVME_CLI=1
00:02:31.035 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:31.035 ++ SPDK_TEST_NVMF_NICS=e810
00:02:31.035 ++ SPDK_TEST_VFIOUSER=1
00:02:31.035 ++ SPDK_RUN_UBSAN=1
00:02:31.035 ++ NET_TYPE=phy
00:02:31.035 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:31.035 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:31.035 ++ RUN_NIGHTLY=1
00:02:31.035 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:31.035 + [[ -n '' ]]
00:02:31.035 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:31.035 + for M in /var/spdk/build-*-manifest.txt
00:02:31.035 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:31.035 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:02:31.035 + for M in /var/spdk/build-*-manifest.txt
00:02:31.035 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:31.035 + cp /var/spdk/build-pkg-manifest.txt
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:31.035 + for M in /var/spdk/build-*-manifest.txt 00:02:31.035 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:31.035 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:31.035 ++ uname 00:02:31.035 + [[ Linux == \L\i\n\u\x ]] 00:02:31.035 + sudo dmesg -T 00:02:31.035 + sudo dmesg --clear 00:02:31.035 + dmesg_pid=5610 00:02:31.035 + [[ Fedora Linux == FreeBSD ]] 00:02:31.035 + sudo dmesg -Tw 00:02:31.035 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.035 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:31.035 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:31.035 + [[ -x /usr/src/fio-static/fio ]] 00:02:31.035 + export FIO_BIN=/usr/src/fio-static/fio 00:02:31.035 + FIO_BIN=/usr/src/fio-static/fio 00:02:31.035 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:31.035 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:31.035 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:31.035 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.035 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:31.035 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:31.035 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.035 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:31.035 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.035 16:08:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:31.035 16:08:21 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.035 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.035 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:31.035 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:31.036 16:08:21 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:31.036 16:08:21 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:31.036 16:08:21 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:31.295 16:08:21 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:31.295 16:08:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.295 16:08:21 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:31.295 16:08:21 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:31.295 16:08:21 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.295 16:08:21 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.296 16:08:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.296 16:08:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.296 16:08:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.296 16:08:21 -- paths/export.sh@5 -- $ export PATH 00:02:31.296 16:08:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.296 16:08:21 -- 
common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:31.296 16:08:21 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:31.296 16:08:21 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732028901.XXXXXX 00:02:31.296 16:08:21 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732028901.Ti9KSq 00:02:31.296 16:08:21 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:31.296 16:08:21 -- common/autobuild_common.sh@492 -- $ '[' -n v22.11.4 ']' 00:02:31.296 16:08:21 -- common/autobuild_common.sh@493 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:31.296 16:08:21 -- common/autobuild_common.sh@493 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:31.296 16:08:21 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:31.296 16:08:21 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:31.296 16:08:21 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:31.296 16:08:21 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:31.296 16:08:21 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.296 16:08:21 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:31.296 16:08:21 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:31.296 16:08:21 -- pm/common@17 -- $ local monitor 00:02:31.296 16:08:21 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.296 16:08:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.296 16:08:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.296 16:08:21 -- pm/common@21 -- $ date +%s 00:02:31.296 16:08:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.296 16:08:21 -- pm/common@21 -- $ date +%s 00:02:31.296 16:08:21 -- pm/common@21 -- $ date +%s 00:02:31.296 16:08:21 -- pm/common@25 -- $ sleep 1 00:02:31.296 16:08:21 -- pm/common@21 -- $ date +%s 00:02:31.296 16:08:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732028901 00:02:31.296 16:08:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732028901 00:02:31.296 16:08:21 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732028901 00:02:31.296 16:08:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732028901 00:02:31.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732028901_collect-vmstat.pm.log 00:02:31.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732028901_collect-cpu-load.pm.log 00:02:31.296 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732028901_collect-cpu-temp.pm.log 00:02:31.296 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732028901_collect-bmc-pm.bmc.pm.log 00:02:32.241 16:08:22 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:32.241 16:08:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.241 16:08:22 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.241 16:08:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.241 16:08:22 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.241 Tue Nov 19 03:08:22 PM UTC 2024 00:02:32.241 16:08:22 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.241 v25.01-pre-197-gdcc2ca8f3 00:02:32.241 16:08:22 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:32.241 16:08:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.241 16:08:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.241 16:08:22 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:32.241 16:08:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:32.241 16:08:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.241 ************************************ 00:02:32.241 START TEST ubsan 00:02:32.241 ************************************ 00:02:32.241 16:08:22 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:32.241 using ubsan 00:02:32.241 00:02:32.241 real 0m0.000s 00:02:32.241 user 0m0.000s 00:02:32.241 sys 0m0.000s 00:02:32.241 16:08:22 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:32.241 16:08:22 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:32.241 ************************************ 00:02:32.241 END TEST ubsan 00:02:32.241 ************************************ 00:02:32.241 16:08:22 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:32.241 16:08:22 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:32.241 16:08:22 -- common/autobuild_common.sh@442 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:32.241 16:08:22 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:32.241 16:08:22 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:32.241 16:08:22 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.241 ************************************ 00:02:32.241 START TEST build_native_dpdk 00:02:32.241 ************************************ 00:02:32.241 16:08:22 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:32.241 16:08:22 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.241 16:08:22 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:32.504 caf0f5d395 version: 22.11.4 00:02:32.504 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:32.504 dc9c799c7d vhost: fix missing spinlock unlock 00:02:32.504 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:32.504 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:32.504 16:08:22 build_native_dpdk -- 
scripts/common.sh@364 -- $ (( v = 0 )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:32.504 patching file config/rte_config.h 00:02:32.504 Hunk #1 succeeded at 60 (offset 1 line). 
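The `cmp_versions` trace above (scripts/common.sh@333-368) splits each version string on `.`, `-`, and `:` and compares the fields numerically, which is why `lt 22.11.4 21.11.0` returns false as soon as the first fields (22 vs 21) are compared. A minimal bash sketch of that logic — a reconstruction for illustration only; `ver_lt` is a hypothetical name, not the actual scripts/common.sh helper:

```shell
# Hypothetical reconstruction of the version comparison traced above.
# Returns 0 (true) when $1 < $2, comparing numeric fields split on ". - :".
ver_lt() {
    local IFS=.-:
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so "21.11" equals "21.11.0".
        local x=${a[i]:-0} y=${b[i]:-0}
        if (( x > y )); then return 1; fi
        if (( x < y )); then return 0; fi
    done
    return 1  # equal versions are not "less than"
}

# Mirrors the trace: 22.11.4 is not older than 21.11.0, so the
# downgrade branch is skipped.
if ver_lt 22.11.4 21.11.0; then echo "apply downgrade patches"; else echo "skip"; fi
# prints "skip"
```

Non-numeric fields (e.g. `-rc1` suffixes) are not handled in this sketch; it only covers the purely numeric comparisons that appear in this trace.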
00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@177 -- $ patch -p1 00:02:32.504 patching file lib/pcapng/rte_pcapng.c 00:02:32.504 Hunk #1 succeeded at 110 (offset -18 lines). 
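Both patches above are applied with `patch -p1`, and the "offset" messages show hunks landing at shifted line numbers rather than failing outright. A self-contained illustration of that workflow — the file name `example.patch` and its contents are made up, not SPDK's actual DPDK patches — using `--dry-run` to verify the hunks before touching the tree:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
printf 'line1\nline2\n' > config.txt

# Minimal unified diff; "example.patch" and its contents are invented
# purely to demonstrate the -p1 strip level used in the log.
cat > example.patch <<'EOF'
--- a/config.txt
+++ b/config.txt
@@ -1,2 +1,2 @@
 line1
-line2
+line2-patched
EOF

# Check that every hunk applies (offsets are tolerated) without writing
# anything, then apply for real with the same -p1 strip level.
patch -p1 --dry-run --silent < example.patch
patch -p1 --silent < example.patch
grep 'line2-patched' config.txt   # prints the patched line
```

If any hunk fails the dry run, `patch` exits nonzero and the tree is left untouched, which is the safety property the real build script relies on when its offsets drift against newer DPDK sources.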
00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@179 -- $ ge 22.11.4 24.07.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:32.504 16:08:22 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@183 -- $ dpdk_kmods=false 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ uname -s 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@184 -- $ '[' Linux = FreeBSD ']' 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:32.504 16:08:22 build_native_dpdk -- common/autobuild_common.sh@188 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:39.092 The Meson build system 00:02:39.092 Version: 
1.5.0 00:02:39.092 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:39.092 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:39.092 Build type: native build 00:02:39.092 Program cat found: YES (/usr/bin/cat) 00:02:39.092 Project name: DPDK 00:02:39.092 Project version: 22.11.4 00:02:39.092 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:39.092 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:39.092 Host machine cpu family: x86_64 00:02:39.092 Host machine cpu: x86_64 00:02:39.092 Message: ## Building in Developer Mode ## 00:02:39.092 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:39.092 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:39.092 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:39.092 Program objdump found: YES (/usr/bin/objdump) 00:02:39.092 Program python3 found: YES (/usr/bin/python3) 00:02:39.092 Program cat found: YES (/usr/bin/cat) 00:02:39.092 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:39.092 Checking for size of "void *" : 8 00:02:39.092 Checking for size of "void *" : 8 (cached) 00:02:39.092 Library m found: YES 00:02:39.092 Library numa found: YES 00:02:39.092 Has header "numaif.h" : YES 00:02:39.092 Library fdt found: NO 00:02:39.092 Library execinfo found: NO 00:02:39.092 Has header "execinfo.h" : YES 00:02:39.092 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:39.092 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:39.092 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:39.092 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:39.092 Run-time dependency openssl found: YES 3.1.1 00:02:39.092 Run-time dependency libpcap found: YES 1.10.4 00:02:39.092 Has header "pcap.h" with dependency libpcap: YES 00:02:39.092 Compiler for C supports arguments -Wcast-qual: YES 00:02:39.092 Compiler for C supports arguments -Wdeprecated: YES 00:02:39.092 Compiler for C supports arguments -Wformat: YES 00:02:39.092 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:39.092 Compiler for C supports arguments -Wformat-security: NO 00:02:39.092 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:39.092 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:39.092 Compiler for C supports arguments -Wnested-externs: YES 00:02:39.092 Compiler for C supports arguments -Wold-style-definition: YES 00:02:39.092 Compiler for C supports arguments -Wpointer-arith: YES 00:02:39.092 Compiler for C supports arguments -Wsign-compare: YES 00:02:39.092 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:39.092 Compiler for C supports arguments -Wundef: YES 00:02:39.092 Compiler for C supports arguments -Wwrite-strings: YES 00:02:39.092 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:39.092 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:39.092 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:39.092 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:39.092 Compiler for C supports arguments -mavx512f: YES 00:02:39.092 Checking if "AVX512 checking" compiles: YES 00:02:39.092 Fetching value of define "__SSE4_2__" : 1 00:02:39.092 Fetching value of define "__AES__" : 1 00:02:39.092 Fetching value of define "__AVX__" : 1 00:02:39.092 Fetching value of define "__AVX2__" : (undefined) 00:02:39.092 Fetching value of define "__AVX512BW__" : (undefined) 00:02:39.092 Fetching value of define "__AVX512CD__" : (undefined) 00:02:39.092 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:39.092 Fetching value of define "__AVX512F__" : (undefined) 00:02:39.092 Fetching value of define "__AVX512VL__" : (undefined) 00:02:39.092 Fetching value of define "__PCLMUL__" : 1 00:02:39.092 Fetching value of define "__RDRND__" : 1 00:02:39.092 Fetching value of define "__RDSEED__" : (undefined) 00:02:39.092 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:39.092 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:39.092 Message: lib/kvargs: Defining dependency "kvargs" 00:02:39.092 Message: lib/telemetry: Defining dependency "telemetry" 00:02:39.092 Checking for function "getentropy" : YES 00:02:39.092 Message: lib/eal: Defining dependency "eal" 00:02:39.092 Message: lib/ring: Defining dependency "ring" 00:02:39.092 Message: lib/rcu: Defining dependency "rcu" 00:02:39.092 Message: lib/mempool: Defining dependency "mempool" 00:02:39.092 Message: lib/mbuf: Defining dependency "mbuf" 00:02:39.092 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:39.092 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.092 Compiler for C supports arguments -mpclmul: YES 00:02:39.092 Compiler for C supports arguments -maes: YES 00:02:39.092 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.092 Compiler for C supports arguments -mavx512bw: YES 00:02:39.092 Compiler for C supports arguments -mavx512dq: YES 
00:02:39.092 Compiler for C supports arguments -mavx512vl: YES 00:02:39.092 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:39.092 Compiler for C supports arguments -mavx2: YES 00:02:39.092 Compiler for C supports arguments -mavx: YES 00:02:39.092 Message: lib/net: Defining dependency "net" 00:02:39.092 Message: lib/meter: Defining dependency "meter" 00:02:39.092 Message: lib/ethdev: Defining dependency "ethdev" 00:02:39.092 Message: lib/pci: Defining dependency "pci" 00:02:39.092 Message: lib/cmdline: Defining dependency "cmdline" 00:02:39.092 Message: lib/metrics: Defining dependency "metrics" 00:02:39.092 Message: lib/hash: Defining dependency "hash" 00:02:39.092 Message: lib/timer: Defining dependency "timer" 00:02:39.092 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:39.092 Compiler for C supports arguments -mavx2: YES (cached) 00:02:39.092 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:39.092 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:39.092 Message: lib/acl: Defining dependency "acl" 00:02:39.092 Message: lib/bbdev: Defining dependency "bbdev" 00:02:39.092 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:39.092 Run-time dependency libelf found: YES 0.191 00:02:39.092 Message: lib/bpf: Defining dependency "bpf" 00:02:39.092 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:39.092 Message: lib/compressdev: Defining dependency "compressdev" 00:02:39.092 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:39.092 Message: lib/distributor: Defining dependency "distributor" 00:02:39.092 Message: lib/efd: Defining dependency "efd" 00:02:39.092 Message: lib/eventdev: Defining dependency "eventdev" 00:02:39.092 
Message: lib/gpudev: Defining dependency "gpudev" 00:02:39.092 Message: lib/gro: Defining dependency "gro" 00:02:39.092 Message: lib/gso: Defining dependency "gso" 00:02:39.092 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:39.092 Message: lib/jobstats: Defining dependency "jobstats" 00:02:39.092 Message: lib/latencystats: Defining dependency "latencystats" 00:02:39.092 Message: lib/lpm: Defining dependency "lpm" 00:02:39.092 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:39.092 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:39.092 Message: lib/member: Defining dependency "member" 00:02:39.092 Message: lib/pcapng: Defining dependency "pcapng" 00:02:39.092 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:39.092 Message: lib/power: Defining dependency "power" 00:02:39.092 Message: lib/rawdev: Defining dependency "rawdev" 00:02:39.092 Message: lib/regexdev: Defining dependency "regexdev" 00:02:39.092 Message: lib/dmadev: Defining dependency "dmadev" 00:02:39.092 Message: lib/rib: Defining dependency "rib" 00:02:39.092 Message: lib/reorder: Defining dependency "reorder" 00:02:39.092 Message: lib/sched: Defining dependency "sched" 00:02:39.092 Message: lib/security: Defining dependency "security" 00:02:39.092 Message: lib/stack: Defining dependency "stack" 00:02:39.092 Has header "linux/userfaultfd.h" : YES 00:02:39.092 Message: lib/vhost: Defining dependency "vhost" 00:02:39.092 Message: lib/ipsec: Defining dependency "ipsec" 00:02:39.092 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.092 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:39.092 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:39.092 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.092 Message: lib/fib: 
Defining dependency "fib" 00:02:39.092 Message: lib/port: Defining dependency "port" 00:02:39.092 Message: lib/pdump: Defining dependency "pdump" 00:02:39.092 Message: lib/table: Defining dependency "table" 00:02:39.092 Message: lib/pipeline: Defining dependency "pipeline" 00:02:39.092 Message: lib/graph: Defining dependency "graph" 00:02:39.092 Message: lib/node: Defining dependency "node" 00:02:39.092 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:39.092 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:39.092 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:39.092 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:39.092 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:39.092 Compiler for C supports arguments -Wno-unused-value: YES 00:02:40.028 Compiler for C supports arguments -Wno-format: YES 00:02:40.028 Compiler for C supports arguments -Wno-format-security: YES 00:02:40.028 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:40.028 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:40.028 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:40.028 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:40.028 Fetching value of define "__AVX2__" : (undefined) (cached) 00:02:40.028 Compiler for C supports arguments -mavx2: YES (cached) 00:02:40.028 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:40.028 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:40.028 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:40.028 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:40.028 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:40.028 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:40.028 Configuring doxy-api.conf using configuration 00:02:40.028 Program sphinx-build found: NO 00:02:40.028 Configuring rte_build_config.h using 
configuration 00:02:40.028 Message: 00:02:40.028 ================= 00:02:40.028 Applications Enabled 00:02:40.028 ================= 00:02:40.028 00:02:40.028 apps: 00:02:40.028 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:40.028 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:40.028 test-security-perf, 00:02:40.028 00:02:40.028 Message: 00:02:40.028 ================= 00:02:40.028 Libraries Enabled 00:02:40.028 ================= 00:02:40.028 00:02:40.028 libs: 00:02:40.028 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:40.028 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:40.028 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:40.028 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:40.028 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:40.028 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:40.028 table, pipeline, graph, node, 00:02:40.028 00:02:40.028 Message: 00:02:40.028 =============== 00:02:40.028 Drivers Enabled 00:02:40.028 =============== 00:02:40.028 00:02:40.028 common: 00:02:40.028 00:02:40.028 bus: 00:02:40.028 pci, vdev, 00:02:40.028 mempool: 00:02:40.028 ring, 00:02:40.028 dma: 00:02:40.028 00:02:40.028 net: 00:02:40.028 i40e, 00:02:40.028 raw: 00:02:40.028 00:02:40.028 crypto: 00:02:40.028 00:02:40.028 compress: 00:02:40.028 00:02:40.028 regex: 00:02:40.028 00:02:40.028 vdpa: 00:02:40.028 00:02:40.028 event: 00:02:40.028 00:02:40.028 baseband: 00:02:40.028 00:02:40.028 gpu: 00:02:40.028 00:02:40.028 00:02:40.028 Message: 00:02:40.028 ================= 00:02:40.028 Content Skipped 00:02:40.028 ================= 00:02:40.028 00:02:40.028 apps: 00:02:40.028 00:02:40.028 libs: 00:02:40.028 kni: explicitly disabled via build config (deprecated lib) 00:02:40.028 flow_classify: explicitly disabled via build config 
(deprecated lib) 00:02:40.028 00:02:40.028 drivers: 00:02:40.028 common/cpt: not in enabled drivers build config 00:02:40.028 common/dpaax: not in enabled drivers build config 00:02:40.028 common/iavf: not in enabled drivers build config 00:02:40.028 common/idpf: not in enabled drivers build config 00:02:40.028 common/mvep: not in enabled drivers build config 00:02:40.028 common/octeontx: not in enabled drivers build config 00:02:40.028 bus/auxiliary: not in enabled drivers build config 00:02:40.028 bus/dpaa: not in enabled drivers build config 00:02:40.028 bus/fslmc: not in enabled drivers build config 00:02:40.028 bus/ifpga: not in enabled drivers build config 00:02:40.028 bus/vmbus: not in enabled drivers build config 00:02:40.028 common/cnxk: not in enabled drivers build config 00:02:40.028 common/mlx5: not in enabled drivers build config 00:02:40.028 common/qat: not in enabled drivers build config 00:02:40.028 common/sfc_efx: not in enabled drivers build config 00:02:40.028 mempool/bucket: not in enabled drivers build config 00:02:40.028 mempool/cnxk: not in enabled drivers build config 00:02:40.028 mempool/dpaa: not in enabled drivers build config 00:02:40.028 mempool/dpaa2: not in enabled drivers build config 00:02:40.028 mempool/octeontx: not in enabled drivers build config 00:02:40.028 mempool/stack: not in enabled drivers build config 00:02:40.028 dma/cnxk: not in enabled drivers build config 00:02:40.028 dma/dpaa: not in enabled drivers build config 00:02:40.028 dma/dpaa2: not in enabled drivers build config 00:02:40.028 dma/hisilicon: not in enabled drivers build config 00:02:40.028 dma/idxd: not in enabled drivers build config 00:02:40.028 dma/ioat: not in enabled drivers build config 00:02:40.028 dma/skeleton: not in enabled drivers build config 00:02:40.028 net/af_packet: not in enabled drivers build config 00:02:40.028 net/af_xdp: not in enabled drivers build config 00:02:40.028 net/ark: not in enabled drivers build config 00:02:40.028 net/atlantic: 
not in enabled drivers build config 00:02:40.028 net/avp: not in enabled drivers build config 00:02:40.028 net/axgbe: not in enabled drivers build config 00:02:40.028 net/bnx2x: not in enabled drivers build config 00:02:40.028 net/bnxt: not in enabled drivers build config 00:02:40.028 net/bonding: not in enabled drivers build config 00:02:40.028 net/cnxk: not in enabled drivers build config 00:02:40.028 net/cxgbe: not in enabled drivers build config 00:02:40.028 net/dpaa: not in enabled drivers build config 00:02:40.028 net/dpaa2: not in enabled drivers build config 00:02:40.028 net/e1000: not in enabled drivers build config 00:02:40.028 net/ena: not in enabled drivers build config 00:02:40.028 net/enetc: not in enabled drivers build config 00:02:40.028 net/enetfec: not in enabled drivers build config 00:02:40.028 net/enic: not in enabled drivers build config 00:02:40.028 net/failsafe: not in enabled drivers build config 00:02:40.028 net/fm10k: not in enabled drivers build config 00:02:40.028 net/gve: not in enabled drivers build config 00:02:40.028 net/hinic: not in enabled drivers build config 00:02:40.028 net/hns3: not in enabled drivers build config 00:02:40.028 net/iavf: not in enabled drivers build config 00:02:40.028 net/ice: not in enabled drivers build config 00:02:40.028 net/idpf: not in enabled drivers build config 00:02:40.028 net/igc: not in enabled drivers build config 00:02:40.028 net/ionic: not in enabled drivers build config 00:02:40.028 net/ipn3ke: not in enabled drivers build config 00:02:40.028 net/ixgbe: not in enabled drivers build config 00:02:40.028 net/kni: not in enabled drivers build config 00:02:40.028 net/liquidio: not in enabled drivers build config 00:02:40.028 net/mana: not in enabled drivers build config 00:02:40.028 net/memif: not in enabled drivers build config 00:02:40.028 net/mlx4: not in enabled drivers build config 00:02:40.028 net/mlx5: not in enabled drivers build config 00:02:40.028 net/mvneta: not in enabled drivers build 
config 00:02:40.028 net/mvpp2: not in enabled drivers build config 00:02:40.028 net/netvsc: not in enabled drivers build config 00:02:40.028 net/nfb: not in enabled drivers build config 00:02:40.028 net/nfp: not in enabled drivers build config 00:02:40.028 net/ngbe: not in enabled drivers build config 00:02:40.028 net/null: not in enabled drivers build config 00:02:40.028 net/octeontx: not in enabled drivers build config 00:02:40.028 net/octeon_ep: not in enabled drivers build config 00:02:40.028 net/pcap: not in enabled drivers build config 00:02:40.028 net/pfe: not in enabled drivers build config 00:02:40.028 net/qede: not in enabled drivers build config 00:02:40.028 net/ring: not in enabled drivers build config 00:02:40.028 net/sfc: not in enabled drivers build config 00:02:40.028 net/softnic: not in enabled drivers build config 00:02:40.028 net/tap: not in enabled drivers build config 00:02:40.028 net/thunderx: not in enabled drivers build config 00:02:40.028 net/txgbe: not in enabled drivers build config 00:02:40.028 net/vdev_netvsc: not in enabled drivers build config 00:02:40.028 net/vhost: not in enabled drivers build config 00:02:40.028 net/virtio: not in enabled drivers build config 00:02:40.028 net/vmxnet3: not in enabled drivers build config 00:02:40.028 raw/cnxk_bphy: not in enabled drivers build config 00:02:40.028 raw/cnxk_gpio: not in enabled drivers build config 00:02:40.028 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:40.028 raw/ifpga: not in enabled drivers build config 00:02:40.028 raw/ntb: not in enabled drivers build config 00:02:40.028 raw/skeleton: not in enabled drivers build config 00:02:40.028 crypto/armv8: not in enabled drivers build config 00:02:40.028 crypto/bcmfs: not in enabled drivers build config 00:02:40.028 crypto/caam_jr: not in enabled drivers build config 00:02:40.028 crypto/ccp: not in enabled drivers build config 00:02:40.028 crypto/cnxk: not in enabled drivers build config 00:02:40.028 crypto/dpaa_sec: not in 
enabled drivers build config 00:02:40.028 crypto/dpaa2_sec: not in enabled drivers build config 00:02:40.028 crypto/ipsec_mb: not in enabled drivers build config 00:02:40.028 crypto/mlx5: not in enabled drivers build config 00:02:40.028 crypto/mvsam: not in enabled drivers build config 00:02:40.028 crypto/nitrox: not in enabled drivers build config 00:02:40.028 crypto/null: not in enabled drivers build config 00:02:40.028 crypto/octeontx: not in enabled drivers build config 00:02:40.028 crypto/openssl: not in enabled drivers build config 00:02:40.028 crypto/scheduler: not in enabled drivers build config 00:02:40.028 crypto/uadk: not in enabled drivers build config 00:02:40.028 crypto/virtio: not in enabled drivers build config 00:02:40.028 compress/isal: not in enabled drivers build config 00:02:40.028 compress/mlx5: not in enabled drivers build config 00:02:40.028 compress/octeontx: not in enabled drivers build config 00:02:40.028 compress/zlib: not in enabled drivers build config 00:02:40.028 regex/mlx5: not in enabled drivers build config 00:02:40.028 regex/cn9k: not in enabled drivers build config 00:02:40.028 vdpa/ifc: not in enabled drivers build config 00:02:40.029 vdpa/mlx5: not in enabled drivers build config 00:02:40.029 vdpa/sfc: not in enabled drivers build config 00:02:40.029 event/cnxk: not in enabled drivers build config 00:02:40.029 event/dlb2: not in enabled drivers build config 00:02:40.029 event/dpaa: not in enabled drivers build config 00:02:40.029 event/dpaa2: not in enabled drivers build config 00:02:40.029 event/dsw: not in enabled drivers build config 00:02:40.029 event/opdl: not in enabled drivers build config 00:02:40.029 event/skeleton: not in enabled drivers build config 00:02:40.029 event/sw: not in enabled drivers build config 00:02:40.029 event/octeontx: not in enabled drivers build config 00:02:40.029 baseband/acc: not in enabled drivers build config 00:02:40.029 baseband/fpga_5gnr_fec: not in enabled drivers build config 
00:02:40.029 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:40.029 baseband/la12xx: not in enabled drivers build config 00:02:40.029 baseband/null: not in enabled drivers build config 00:02:40.029 baseband/turbo_sw: not in enabled drivers build config 00:02:40.029 gpu/cuda: not in enabled drivers build config 00:02:40.029 00:02:40.029 00:02:40.029 Build targets in project: 316 00:02:40.029 00:02:40.029 DPDK 22.11.4 00:02:40.029 00:02:40.029 User defined options 00:02:40.029 libdir : lib 00:02:40.029 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:40.029 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:40.029 c_link_args : 00:02:40.029 enable_docs : false 00:02:40.029 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:40.029 enable_kmods : false 00:02:40.029 machine : native 00:02:40.029 tests : false 00:02:40.029 00:02:40.029 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:40.029 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
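Meson's closing warning above flags the bare `meson [options]` invocation from autobuild_common.sh@188 as deprecated. The non-deprecated equivalent spells out the `setup` subcommand; the command below is an illustration assembled from the "User defined options" summary in the log, not the exact line the CI ran:

```shell
# Equivalent configure step using the explicit "meson setup" form.
# Paths and flags are copied from the build summary above.
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib \
    -Denable_docs=false \
    -Denable_kmods=false \
    -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base
```

Note that the config/meson.build warning earlier in this log also deprecates the `machine` option itself in favor of `cpu_instruction_set`, so on newer DPDK releases `-Dmachine=native` would become `-Dcpu_instruction_set=native`.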
00:02:40.029 16:08:30 build_native_dpdk -- common/autobuild_common.sh@192 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:02:40.029 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:40.360 [1/745] Generating lib/rte_telemetry_def with a custom command 00:02:40.360 [2/745] Generating lib/rte_telemetry_mingw with a custom command 00:02:40.361 [3/745] Generating lib/rte_kvargs_def with a custom command 00:02:40.361 [4/745] Generating lib/rte_kvargs_mingw with a custom command 00:02:40.361 [5/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:40.361 [6/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:40.361 [7/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:40.361 [8/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:40.361 [9/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:40.361 [10/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:40.361 [11/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:40.361 [12/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:40.361 [13/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:40.361 [14/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:40.361 [15/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:40.361 [16/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:40.361 [17/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.361 [18/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.361 [19/745] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:40.361 [20/745] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:40.361 [21/745] Linking static target lib/librte_kvargs.a 00:02:40.361 [22/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.361 [23/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:40.361 [24/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.361 [25/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.361 [26/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:40.361 [27/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:40.361 [28/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.361 [29/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.361 [30/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:40.361 [31/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.361 [32/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.361 [33/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:40.633 [34/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.633 [35/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.633 [36/745] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.633 [37/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.633 [38/745] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.633 [39/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.633 [40/745] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.633 [41/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:40.633 [42/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.633 [43/745] 
Generating lib/rte_eal_mingw with a custom command 00:02:40.633 [44/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.633 [45/745] Generating lib/rte_eal_def with a custom command 00:02:40.633 [46/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:40.633 [47/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:40.633 [48/745] Generating lib/rte_ring_mingw with a custom command 00:02:40.633 [49/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.633 [50/745] Generating lib/rte_ring_def with a custom command 00:02:40.633 [51/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:40.633 [52/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:40.633 [53/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:40.633 [54/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:40.633 [55/745] Generating lib/rte_rcu_mingw with a custom command 00:02:40.633 [56/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:40.633 [57/745] Generating lib/rte_rcu_def with a custom command 00:02:40.633 [58/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.633 [59/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.633 [60/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.633 [61/745] Generating lib/rte_mempool_def with a custom command 00:02:40.633 [62/745] Generating lib/rte_mempool_mingw with a custom command 00:02:40.633 [63/745] Generating lib/rte_mbuf_mingw with a custom command 00:02:40.633 [64/745] Generating lib/rte_mbuf_def with a custom command 00:02:40.633 [65/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:40.633 [66/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 
00:02:40.633 [67/745] Generating lib/rte_net_def with a custom command 00:02:40.633 [68/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:40.633 [69/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.633 [70/745] Generating lib/rte_net_mingw with a custom command 00:02:40.633 [71/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.633 [72/745] Generating lib/rte_meter_def with a custom command 00:02:40.633 [73/745] Generating lib/rte_meter_mingw with a custom command 00:02:40.633 [74/745] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.633 [75/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.633 [76/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:40.633 [77/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.904 [78/745] Generating lib/rte_ethdev_def with a custom command 00:02:40.904 [79/745] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.904 [80/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.904 [81/745] Linking static target lib/librte_ring.a 00:02:40.904 [82/745] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.904 [83/745] Generating lib/rte_ethdev_mingw with a custom command 00:02:40.904 [84/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.904 [85/745] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:40.904 [86/745] Linking target lib/librte_kvargs.so.23.0 00:02:40.904 [87/745] Linking static target lib/librte_meter.a 00:02:40.904 [88/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:40.904 [89/745] Generating lib/rte_pci_mingw with a custom command 00:02:40.904 [90/745] Generating lib/rte_pci_def with a custom command 00:02:40.904 [91/745] Compiling C object 
lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:40.904 [92/745] Linking static target lib/librte_pci.a 00:02:41.170 [93/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:41.170 [94/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:41.170 [95/745] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:41.170 [96/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:41.170 [97/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:41.170 [98/745] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.170 [99/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:41.170 [100/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:41.440 [101/745] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.440 [102/745] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.440 [103/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:41.440 [104/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:41.440 [105/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:41.440 [106/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:41.440 [107/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:41.440 [108/745] Generating lib/rte_cmdline_def with a custom command 00:02:41.440 [109/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:41.440 [110/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:41.440 [111/745] Generating lib/rte_cmdline_mingw with a custom command 00:02:41.440 [112/745] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:41.440 [113/745] Generating lib/rte_metrics_def with 
a custom command 00:02:41.440 [114/745] Generating lib/rte_metrics_mingw with a custom command 00:02:41.440 [115/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.440 [116/745] Linking static target lib/librte_telemetry.a 00:02:41.440 [117/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:41.440 [118/745] Generating lib/rte_hash_def with a custom command 00:02:41.440 [119/745] Generating lib/rte_hash_mingw with a custom command 00:02:41.440 [120/745] Generating lib/rte_timer_def with a custom command 00:02:41.440 [121/745] Generating lib/rte_timer_mingw with a custom command 00:02:41.701 [122/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:41.701 [123/745] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:41.701 [124/745] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.701 [125/745] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.701 [126/745] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:41.701 [127/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:41.701 [128/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.701 [129/745] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.701 [130/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.701 [131/745] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:41.971 [132/745] Generating lib/rte_acl_def with a custom command 00:02:41.971 [133/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.971 [134/745] Generating lib/rte_acl_mingw with a custom command 00:02:41.971 [135/745] Generating lib/rte_bbdev_mingw with a custom command 00:02:41.971 [136/745] Generating lib/rte_bbdev_def with a custom command 00:02:41.971 [137/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:41.971 
[138/745] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.971 [139/745] Generating lib/rte_bitratestats_def with a custom command 00:02:41.971 [140/745] Generating lib/rte_bitratestats_mingw with a custom command 00:02:41.971 [141/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:41.971 [142/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:41.971 [143/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.971 [144/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:41.971 [145/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:41.971 [146/745] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:41.971 [147/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.971 [148/745] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.971 [149/745] Generating lib/rte_bpf_def with a custom command 00:02:41.971 [150/745] Generating lib/rte_bpf_mingw with a custom command 00:02:42.233 [151/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:42.233 [152/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.233 [153/745] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.233 [154/745] Linking target lib/librte_telemetry.so.23.0 00:02:42.233 [155/745] Generating lib/rte_cfgfile_mingw with a custom command 00:02:42.233 [156/745] Generating lib/rte_cfgfile_def with a custom command 00:02:42.233 [157/745] Generating lib/rte_compressdev_def with a custom command 00:02:42.233 [158/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:42.233 [159/745] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.233 [160/745] Generating lib/rte_compressdev_mingw with a custom command 00:02:42.233 [161/745] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:42.233 [162/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:42.233 [163/745] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:42.233 [164/745] Linking static target lib/librte_rcu.a 00:02:42.233 [165/745] Generating lib/rte_cryptodev_def with a custom command 00:02:42.233 [166/745] Generating lib/rte_cryptodev_mingw with a custom command 00:02:42.233 [167/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:42.233 [168/745] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:42.233 [169/745] Linking static target lib/librte_timer.a 00:02:42.233 [170/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:42.233 [171/745] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.499 [172/745] Generating lib/rte_distributor_def with a custom command 00:02:42.499 [173/745] Linking static target lib/librte_net.a 00:02:42.499 [174/745] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:42.499 [175/745] Generating lib/rte_distributor_mingw with a custom command 00:02:42.499 [176/745] Linking static target lib/librte_cmdline.a 00:02:42.499 [177/745] Generating lib/rte_efd_def with a custom command 00:02:42.499 [178/745] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:42.499 [179/745] Generating lib/rte_efd_mingw with a custom command 00:02:42.499 [180/745] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:42.499 [181/745] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:42.764 [182/745] Linking static target lib/librte_cfgfile.a 00:02:42.764 [183/745] Linking static target lib/librte_metrics.a 00:02:42.764 [184/745] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:42.764 [185/745] Linking static target lib/librte_mempool.a 00:02:42.764 [186/745] Generating lib/rcu.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:42.764 [187/745] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.764 [188/745] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.032 [189/745] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:43.032 [190/745] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:43.032 [191/745] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:43.032 [192/745] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:43.032 [193/745] Linking static target lib/librte_eal.a 00:02:43.032 [194/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:43.032 [195/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:43.032 [196/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:43.032 [197/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:43.032 [198/745] Generating lib/rte_eventdev_def with a custom command 00:02:43.032 [199/745] Generating lib/rte_eventdev_mingw with a custom command 00:02:43.296 [200/745] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.296 [201/745] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:43.296 [202/745] Linking static target lib/librte_bitratestats.a 00:02:43.296 [203/745] Generating lib/rte_gpudev_def with a custom command 00:02:43.296 [204/745] Generating lib/rte_gpudev_mingw with a custom command 00:02:43.296 [205/745] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:43.296 [206/745] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.296 [207/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:43.296 [208/745] Generating lib/rte_gro_def with a custom command 00:02:43.296 [209/745] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:43.296 [210/745] Generating lib/rte_gro_mingw with a custom command 00:02:43.563 [211/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:43.563 [212/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:43.563 [213/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:43.563 [214/745] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.563 [215/745] Generating lib/rte_gso_def with a custom command 00:02:43.563 [216/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:43.563 [217/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:43.563 [218/745] Generating lib/rte_gso_mingw with a custom command 00:02:43.563 [219/745] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:43.563 [220/745] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:43.563 [221/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:43.827 [222/745] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:43.827 [223/745] Linking static target lib/librte_bbdev.a 00:02:43.827 [224/745] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.827 [225/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:43.827 [226/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:43.827 [227/745] Generating lib/rte_ip_frag_def with a custom command 00:02:43.827 [228/745] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.827 [229/745] Generating lib/rte_ip_frag_mingw with a custom command 00:02:43.827 [230/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 
00:02:43.827 [231/745] Generating lib/rte_jobstats_def with a custom command 00:02:43.827 [232/745] Generating lib/rte_jobstats_mingw with a custom command 00:02:43.827 [233/745] Generating lib/rte_latencystats_def with a custom command 00:02:43.827 [234/745] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:43.827 [235/745] Generating lib/rte_latencystats_mingw with a custom command 00:02:43.827 [236/745] Linking static target lib/librte_compressdev.a 00:02:44.094 [237/745] Generating lib/rte_lpm_def with a custom command 00:02:44.094 [238/745] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:44.094 [239/745] Generating lib/rte_lpm_mingw with a custom command 00:02:44.094 [240/745] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:44.094 [241/745] Linking static target lib/librte_jobstats.a 00:02:44.094 [242/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:44.094 [243/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:44.360 [244/745] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:44.360 [245/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:44.360 [246/745] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:44.360 [247/745] Linking static target lib/librte_distributor.a 00:02:44.360 [248/745] Generating lib/rte_member_def with a custom command 00:02:44.360 [249/745] Generating lib/rte_member_mingw with a custom command 00:02:44.623 [250/745] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.623 [251/745] Generating lib/rte_pcapng_def with a custom command 00:02:44.623 [252/745] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:44.623 [253/745] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:44.623 [254/745] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:44.623 [255/745] 
Linking static target lib/librte_bpf.a 00:02:44.623 [256/745] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.623 [257/745] Generating lib/rte_pcapng_mingw with a custom command 00:02:44.623 [258/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:44.893 [259/745] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:44.893 [260/745] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:44.893 [261/745] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.893 [262/745] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:44.893 [263/745] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:44.893 [264/745] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:44.893 [265/745] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:44.893 [266/745] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:44.893 [267/745] Generating lib/rte_power_def with a custom command 00:02:44.893 [268/745] Generating lib/rte_power_mingw with a custom command 00:02:44.893 [269/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:44.893 [270/745] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:44.893 [271/745] Generating lib/rte_rawdev_def with a custom command 00:02:44.893 [272/745] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:44.893 [273/745] Generating lib/rte_rawdev_mingw with a custom command 00:02:44.893 [274/745] Linking static target lib/librte_gpudev.a 00:02:44.893 [275/745] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:44.893 [276/745] Linking static target lib/librte_gro.a 00:02:44.893 [277/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:44.893 [278/745] Generating lib/rte_regexdev_def with a custom command 00:02:44.893 [279/745] Generating lib/rte_regexdev_mingw with a 
custom command 00:02:44.893 [280/745] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:44.893 [281/745] Generating lib/rte_dmadev_def with a custom command 00:02:45.161 [282/745] Generating lib/rte_dmadev_mingw with a custom command 00:02:45.161 [283/745] Generating lib/rte_rib_def with a custom command 00:02:45.161 [284/745] Generating lib/rte_rib_mingw with a custom command 00:02:45.161 [285/745] Generating lib/rte_reorder_def with a custom command 00:02:45.161 [286/745] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.161 [287/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:45.161 [288/745] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:45.161 [289/745] Generating lib/rte_reorder_mingw with a custom command 00:02:45.425 [290/745] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.425 [291/745] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:45.425 [292/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:45.425 [293/745] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:45.425 [294/745] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.425 [295/745] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:45.425 [296/745] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:45.425 [297/745] Generating lib/rte_sched_def with a custom command 00:02:45.425 [298/745] Linking static target lib/librte_latencystats.a 00:02:45.425 [299/745] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:45.425 [300/745] Generating lib/rte_sched_mingw with a custom command 00:02:45.425 [301/745] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:45.425 [302/745] Generating lib/rte_security_def with a custom command 00:02:45.425 
[303/745] Generating lib/rte_security_mingw with a custom command 00:02:45.425 [304/745] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:45.425 [305/745] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:45.425 [306/745] Generating lib/rte_stack_def with a custom command 00:02:45.425 [307/745] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:45.425 [308/745] Generating lib/rte_stack_mingw with a custom command 00:02:45.694 [309/745] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:45.694 [310/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:45.694 [311/745] Linking static target lib/librte_rawdev.a 00:02:45.694 [312/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:45.694 [313/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:45.694 [314/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:45.694 [315/745] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:45.694 [316/745] Linking static target lib/librte_stack.a 00:02:45.694 [317/745] Generating lib/rte_vhost_def with a custom command 00:02:45.694 [318/745] Generating lib/rte_vhost_mingw with a custom command 00:02:45.694 [319/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:45.694 [320/745] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:45.694 [321/745] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:45.694 [322/745] Linking static target lib/librte_dmadev.a 00:02:45.694 [323/745] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.959 [324/745] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:45.959 [325/745] Linking static target lib/librte_ip_frag.a 00:02:45.959 [326/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 
00:02:45.959 [327/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:45.959 [328/745] Generating lib/rte_ipsec_def with a custom command 00:02:45.959 [329/745] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.959 [330/745] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:45.959 [331/745] Generating lib/rte_ipsec_mingw with a custom command 00:02:45.959 [332/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:46.222 [333/745] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:46.222 [334/745] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.222 [335/745] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.222 [336/745] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:46.222 [337/745] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.488 [338/745] Generating lib/rte_fib_def with a custom command 00:02:46.489 [339/745] Generating lib/rte_fib_mingw with a custom command 00:02:46.489 [340/745] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:46.489 [341/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:46.489 [342/745] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:46.489 [343/745] Linking static target lib/librte_regexdev.a 00:02:46.489 [344/745] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:46.489 [345/745] Linking static target lib/librte_gso.a 00:02:46.489 [346/745] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.764 [347/745] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:46.764 [348/745] Linking static target lib/librte_efd.a 00:02:46.764 [349/745] Compiling C object 
lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:46.764 [350/745] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:46.764 [351/745] Linking static target lib/librte_pcapng.a 00:02:46.764 [352/745] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.032 [353/745] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:47.032 [354/745] Linking static target lib/librte_lpm.a 00:02:47.032 [355/745] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:47.032 [356/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:47.032 [357/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:47.032 [358/745] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:47.032 [359/745] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:47.032 [360/745] Linking static target lib/librte_reorder.a 00:02:47.032 [361/745] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:47.032 [362/745] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.032 [363/745] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:47.294 [364/745] Generating lib/rte_port_def with a custom command 00:02:47.294 [365/745] Generating lib/rte_port_mingw with a custom command 00:02:47.294 [366/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:47.294 [367/745] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.294 [368/745] Generating lib/rte_pdump_def with a custom command 00:02:47.294 [369/745] Generating lib/rte_pdump_mingw with a custom command 00:02:47.294 [370/745] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:47.294 [371/745] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:47.294 [372/745] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:47.294 [373/745] Compiling C object 
lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:47.294 [374/745] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:47.294 [375/745] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:47.294 [376/745] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:47.294 [377/745] Linking static target lib/acl/libavx2_tmp.a 00:02:47.294 [378/745] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:47.294 [379/745] Linking static target lib/librte_security.a 00:02:47.294 [380/745] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:47.294 [381/745] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.564 [382/745] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.564 [383/745] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:47.564 [384/745] Linking static target lib/librte_hash.a 00:02:47.564 [385/745] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.564 [386/745] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:47.564 [387/745] Linking static target lib/librte_power.a 00:02:47.564 [388/745] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:47.564 [389/745] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.827 [390/745] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:47.827 [391/745] Linking static target lib/acl/libavx512_tmp.a 00:02:47.827 [392/745] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:47.827 [393/745] Linking static target lib/librte_rib.a 00:02:47.827 [394/745] Linking static target lib/librte_acl.a 00:02:47.827 [395/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:47.827 [396/745] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:47.827 [397/745] Linking static target 
lib/librte_ethdev.a 00:02:48.094 [398/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:48.094 [399/745] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.094 [400/745] Generating lib/rte_table_def with a custom command 00:02:48.094 [401/745] Generating lib/rte_table_mingw with a custom command 00:02:48.094 [402/745] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.358 [403/745] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.358 [404/745] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.622 [405/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:48.622 [406/745] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:48.622 [407/745] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.622 [408/745] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:48.622 [409/745] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:48.622 [410/745] Linking static target lib/librte_mbuf.a 00:02:48.622 [411/745] Generating lib/rte_pipeline_def with a custom command 00:02:48.622 [412/745] Generating lib/rte_pipeline_mingw with a custom command 00:02:48.622 [413/745] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:48.622 [414/745] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:48.622 [415/745] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:48.622 [416/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:48.622 [417/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:48.890 [418/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:48.890 [419/745] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:48.890 
[420/745] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:48.890 [421/745] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:48.890 [422/745] Linking static target lib/librte_fib.a 00:02:48.890 [423/745] Generating lib/rte_graph_def with a custom command 00:02:48.890 [424/745] Generating lib/rte_graph_mingw with a custom command 00:02:48.890 [425/745] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:48.890 [426/745] Linking static target lib/librte_member.a 00:02:48.890 [427/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:49.156 [428/745] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:49.156 [429/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:49.156 [430/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:49.156 [431/745] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.156 [432/745] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:49.156 [433/745] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:49.156 [434/745] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:49.156 [435/745] Generating lib/rte_node_def with a custom command 00:02:49.156 [436/745] Generating lib/rte_node_mingw with a custom command 00:02:49.156 [437/745] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:49.156 [438/745] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:49.156 [439/745] Linking static target lib/librte_eventdev.a 00:02:49.423 [440/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:49.423 [441/745] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.423 [442/745] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:49.423 [443/745] Linking static target lib/librte_sched.a 
00:02:49.423 [444/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:49.423 [445/745] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:49.423 [446/745] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.423 [447/745] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.423 [448/745] Generating drivers/rte_bus_pci_def with a custom command 00:02:49.423 [449/745] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:49.423 [450/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:49.691 [451/745] Generating drivers/rte_bus_vdev_def with a custom command 00:02:49.691 [452/745] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:49.691 [453/745] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:49.691 [454/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:49.691 [455/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:49.691 [456/745] Linking static target lib/librte_cryptodev.a 00:02:49.691 [457/745] Generating drivers/rte_mempool_ring_def with a custom command 00:02:49.691 [458/745] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:49.691 [459/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:49.691 [460/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:49.691 [461/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:49.691 [462/745] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:49.691 [463/745] Linking static target lib/librte_pdump.a 00:02:49.957 [464/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:49.957 [465/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:49.957 [466/745] Compiling C object 
lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:49.957 [467/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:49.957 [468/745] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:49.957 [469/745] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:49.957 [470/745] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:49.957 [471/745] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:49.957 [472/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:49.957 [473/745] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:50.220 [474/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:50.220 [475/745] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:50.220 [476/745] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.220 [477/745] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:50.220 [478/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:50.220 [479/745] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:50.220 [480/745] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.220 [481/745] Generating drivers/rte_net_i40e_def with a custom command 00:02:50.220 [482/745] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:50.220 [483/745] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:50.220 [484/745] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:50.220 [485/745] Linking static target lib/librte_table.a 00:02:50.489 [486/745] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:50.489 [487/745] Linking static target lib/librte_ipsec.a 00:02:50.489 [488/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:50.489 [489/745] Generating drivers/rte_bus_vdev.pmd.c 
with a custom command 00:02:50.490 [490/745] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:50.490 [491/745] Linking static target drivers/librte_bus_vdev.a 00:02:50.758 [492/745] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:50.758 [493/745] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:50.758 [494/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:50.759 [495/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:51.027 [496/745] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:51.027 [497/745] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:51.027 [498/745] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.027 [499/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:51.027 [500/745] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:51.027 [501/745] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:51.027 [502/745] Linking static target lib/librte_graph.a 00:02:51.027 [503/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:51.027 [504/745] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.027 [505/745] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:51.293 [506/745] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:51.293 [507/745] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:51.293 [508/745] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.293 [509/745] Linking static target drivers/librte_bus_pci.a 00:02:51.293 [510/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:51.293 [511/745] Compiling C object 
drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:51.293 [512/745] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:51.568 [513/745] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.568 [514/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:51.833 [515/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:51.833 [516/745] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.833 [517/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:52.103 [518/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:52.103 [519/745] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:52.103 [520/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:52.103 [521/745] Linking static target lib/librte_port.a 00:02:52.103 [522/745] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:52.103 [523/745] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:52.372 [524/745] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:52.372 [525/745] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:52.372 [526/745] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.672 [527/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:52.672 [528/745] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.672 [529/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:52.672 [530/745] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:52.672 [531/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 
00:02:52.672 [532/745] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.672 [533/745] Linking static target drivers/librte_mempool_ring.a 00:02:52.672 [534/745] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:52.672 [535/745] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:52.672 [536/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:52.672 [537/745] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:52.941 [538/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:52.941 [539/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:52.941 [540/745] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.941 [541/745] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.211 [542/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:53.476 [543/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:53.477 [544/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:53.477 [545/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:53.477 [546/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:53.750 [547/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:53.750 [548/745] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:53.750 [549/745] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:53.750 [550/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:53.750 [551/745] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:54.014 [552/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:54.014 [553/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:54.279 [554/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:54.279 [555/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:54.279 [556/745] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:54.279 [557/745] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:54.546 [558/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:54.546 [559/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:54.813 [560/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:54.813 [561/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:54.813 [562/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:54.813 [563/745] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:54.813 [564/745] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:54.813 [565/745] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:54.813 [566/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:55.077 [567/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:55.077 [568/745] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:55.077 [569/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:55.077 [570/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:55.351 [571/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:55.351 [572/745] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:55.351 [573/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:55.617 [574/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:55.617 [575/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:55.617 [576/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:55.617 [577/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:55.617 [578/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:55.617 [579/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:55.883 [580/745] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.883 [581/745] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:55.883 [582/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:55.883 [583/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:55.883 [584/745] Linking target lib/librte_eal.so.23.0 00:02:56.154 [585/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:56.154 [586/745] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.154 [587/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:56.154 [588/745] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:56.154 [589/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:56.154 [590/745] Linking target lib/librte_ring.so.23.0 00:02:56.154 [591/745] Linking target lib/librte_meter.so.23.0 00:02:56.154 [592/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:56.417 [593/745] Linking target lib/librte_pci.so.23.0 
00:02:56.417 [594/745] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:56.417 [595/745] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:56.417 [596/745] Linking target lib/librte_rcu.so.23.0 00:02:56.417 [597/745] Linking target lib/librte_mempool.so.23.0 00:02:56.685 [598/745] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:56.685 [599/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:56.685 [600/745] Linking target lib/librte_timer.so.23.0 00:02:56.685 [601/745] Linking target lib/librte_acl.so.23.0 00:02:56.685 [602/745] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:56.685 [603/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:56.685 [604/745] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:56.685 [605/745] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:56.685 [606/745] Linking target lib/librte_cfgfile.so.23.0 00:02:56.949 [607/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:56.949 [608/745] Linking target lib/librte_jobstats.so.23.0 00:02:56.949 [609/745] Linking target lib/librte_mbuf.so.23.0 00:02:56.949 [610/745] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:56.949 [611/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:56.949 [612/745] Linking target lib/librte_dmadev.so.23.0 00:02:56.949 [613/745] Linking target lib/librte_rawdev.so.23.0 00:02:56.949 [614/745] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:56.949 [615/745] Linking target lib/librte_stack.so.23.0 00:02:56.949 [616/745] Linking target lib/librte_rib.so.23.0 00:02:56.949 [617/745] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 
00:02:56.949 [618/745] Linking target lib/librte_graph.so.23.0 00:02:56.949 [619/745] Linking target drivers/librte_bus_pci.so.23.0 00:02:56.949 [620/745] Linking target drivers/librte_bus_vdev.so.23.0 00:02:56.949 [621/745] Linking target drivers/librte_mempool_ring.so.23.0 00:02:56.949 [622/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:56.949 [623/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:56.949 [624/745] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:57.213 [625/745] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:57.213 [626/745] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:57.213 [627/745] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:57.213 [628/745] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:57.213 [629/745] Linking target lib/librte_gpudev.so.23.0 00:02:57.213 [630/745] Linking target lib/librte_distributor.so.23.0 00:02:57.213 [631/745] Linking target lib/librte_bbdev.so.23.0 00:02:57.213 [632/745] Linking target lib/librte_compressdev.so.23.0 00:02:57.213 [633/745] Linking target lib/librte_net.so.23.0 00:02:57.213 [634/745] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:57.213 [635/745] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:57.213 [636/745] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:57.213 [637/745] Linking target lib/librte_cryptodev.so.23.0 00:02:57.213 [638/745] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:57.213 [639/745] Linking target lib/librte_regexdev.so.23.0 00:02:57.213 [640/745] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:57.213 [641/745] Linking target lib/librte_reorder.so.23.0 00:02:57.213 
[642/745] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:57.213 [643/745] Linking target lib/librte_sched.so.23.0 00:02:57.213 [644/745] Linking target lib/librte_fib.so.23.0 00:02:57.473 [645/745] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:57.473 [646/745] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:57.473 [647/745] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:57.473 [648/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:57.473 [649/745] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:57.473 [650/745] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:57.473 [651/745] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:57.473 [652/745] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:57.473 [653/745] Linking target lib/librte_security.so.23.0 00:02:57.473 [654/745] Linking target lib/librte_cmdline.so.23.0 00:02:57.473 [655/745] Linking target lib/librte_hash.so.23.0 00:02:57.473 [656/745] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:57.473 [657/745] Linking target lib/librte_ethdev.so.23.0 00:02:57.473 [658/745] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:57.473 [659/745] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:57.473 [660/745] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:57.732 [661/745] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:57.732 [662/745] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:57.732 [663/745] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:57.732 [664/745] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:57.732 [665/745] Compiling C object 
app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:57.732 [666/745] Linking target lib/librte_lpm.so.23.0 00:02:57.732 [667/745] Linking target lib/librte_metrics.so.23.0 00:02:57.732 [668/745] Linking target lib/librte_efd.so.23.0 00:02:57.732 [669/745] Linking target lib/librte_member.so.23.0 00:02:57.732 [670/745] Linking target lib/librte_ipsec.so.23.0 00:02:57.732 [671/745] Linking target lib/librte_pcapng.so.23.0 00:02:57.732 [672/745] Linking target lib/librte_gso.so.23.0 00:02:57.732 [673/745] Linking target lib/librte_gro.so.23.0 00:02:57.732 [674/745] Linking target lib/librte_ip_frag.so.23.0 00:02:57.732 [675/745] Linking target lib/librte_power.so.23.0 00:02:57.732 [676/745] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:57.732 [677/745] Linking target lib/librte_bpf.so.23.0 00:02:57.732 [678/745] Linking target lib/librte_eventdev.so.23.0 00:02:57.732 [679/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:57.732 [680/745] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:57.990 [681/745] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:57.990 [682/745] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:57.990 [683/745] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:57.990 [684/745] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:57.990 [685/745] Linking target lib/librte_bitratestats.so.23.0 00:02:57.990 [686/745] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:57.990 [687/745] Linking target lib/librte_latencystats.so.23.0 00:02:57.990 [688/745] Linking target lib/librte_port.so.23.0 00:02:57.990 [689/745] Linking target lib/librte_pdump.so.23.0 00:02:57.990 [690/745] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 
00:02:57.990 [691/745] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:58.248 [692/745] Linking target lib/librte_table.so.23.0 00:02:58.248 [693/745] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:58.248 [694/745] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:58.248 [695/745] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:58.507 [696/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:58.765 [697/745] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:58.765 [698/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:59.024 [699/745] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:59.024 [700/745] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:59.024 [701/745] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:59.024 [702/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:59.283 [703/745] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:59.542 [704/745] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:59.542 [705/745] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:59.542 [706/745] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.542 [707/745] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:59.542 [708/745] Linking static target drivers/librte_net_i40e.a 00:02:59.800 [709/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:00.059 [710/745] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.059 [711/745] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 
00:03:00.059 [712/745] Linking target drivers/librte_net_i40e.so.23.0 00:03:00.626 [713/745] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:00.626 [714/745] Linking static target lib/librte_node.a 00:03:00.884 [715/745] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.884 [716/745] Linking target lib/librte_node.so.23.0 00:03:01.143 [717/745] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:02.519 [718/745] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:02.519 [719/745] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:10.629 [720/745] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:42.700 [721/745] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:42.700 [722/745] Linking static target lib/librte_vhost.a 00:03:42.700 [723/745] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.700 [724/745] Linking target lib/librte_vhost.so.23.0 00:03:52.682 [725/745] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:52.682 [726/745] Linking static target lib/librte_pipeline.a 00:03:52.941 [727/745] Linking target app/dpdk-dumpcap 00:03:52.941 [728/745] Linking target app/dpdk-test-bbdev 00:03:52.941 [729/745] Linking target app/dpdk-test-pipeline 00:03:52.941 [730/745] Linking target app/dpdk-test-regex 00:03:52.941 [731/745] Linking target app/dpdk-test-flow-perf 00:03:52.941 [732/745] Linking target app/dpdk-test-security-perf 00:03:52.941 [733/745] Linking target app/dpdk-test-eventdev 00:03:52.941 [734/745] Linking target app/dpdk-test-sad 00:03:52.941 [735/745] Linking target app/dpdk-test-fib 00:03:52.941 [736/745] Linking target app/dpdk-test-compress-perf 00:03:52.941 [737/745] Linking target app/dpdk-test-cmdline 00:03:52.941 [738/745] Linking target app/dpdk-pdump 00:03:52.941 [739/745] Linking target app/dpdk-test-acl 
00:03:52.941 [740/745] Linking target app/dpdk-test-gpudev 00:03:52.941 [741/745] Linking target app/dpdk-proc-info 00:03:52.941 [742/745] Linking target app/dpdk-test-crypto-perf 00:03:53.200 [743/745] Linking target app/dpdk-testpmd 00:03:55.109 [744/745] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:55.109 [745/745] Linking target lib/librte_pipeline.so.23.0 00:03:55.109 16:09:45 build_native_dpdk -- common/autobuild_common.sh@194 -- $ uname -s 00:03:55.109 16:09:45 build_native_dpdk -- common/autobuild_common.sh@194 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:55.109 16:09:45 build_native_dpdk -- common/autobuild_common.sh@207 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:55.109 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:55.109 [0/1] Installing files. 00:03:55.372 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 
00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.372 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:55.373 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:03:55.374 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.375 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.639 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd
00:03:55.640
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.640 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:55.641 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:03:55.641 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:55.642 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto
00:03:55.642 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.642 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:55.643 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:56.218 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:56.218 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:56.218 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:56.218 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0
00:03:56.218 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.218 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.219 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.220 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:56.221 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:56.222 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:56.222 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:03:56.222 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:56.222 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:03:56.222 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:56.222 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23 
00:03:56.222 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:56.222 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:03:56.222 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:56.222 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:03:56.222 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:56.222 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:03:56.222 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:56.222 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:03:56.222 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:56.222 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:03:56.222 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:56.222 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:03:56.222 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:56.222 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:03:56.222 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:56.222 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:03:56.222 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:56.222 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:03:56.222 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:56.222 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:03:56.222 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:56.222 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:03:56.222 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:56.222 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:03:56.222 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:56.222 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:03:56.222 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:56.222 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:03:56.222 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:56.222 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:03:56.222 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:56.222 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:03:56.222 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:56.222 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:03:56.222 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:56.222 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:03:56.222 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:56.222 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:03:56.222 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:56.222 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:03:56.222 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:56.222 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:03:56.222 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:56.222 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:03:56.222 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:56.222 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:03:56.222 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:56.222 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:03:56.222 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:56.222 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:03:56.222 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:56.222 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:03:56.222 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:56.222 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:03:56.222 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:56.222 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:03:56.222 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:56.222 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:03:56.222 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:56.222 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:03:56.222 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:56.222 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:03:56.222 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:56.222 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:03:56.222 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:56.222 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:03:56.222 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:56.222 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:03:56.222 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:56.222 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:03:56.222 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:56.223 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:03:56.223 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:56.223 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:03:56.223 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:56.223 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:03:56.223 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:56.223 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:03:56.223 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:56.223 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:03:56.223 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:56.223 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:03:56.223 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:56.223 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:03:56.223 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:56.223 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:03:56.223 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:56.223 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:03:56.223 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:56.223 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:03:56.223 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:56.223 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:03:56.223 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:56.223 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:03:56.223 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:56.223 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:03:56.223 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:56.223 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:03:56.223 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:56.223 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:56.223 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:56.223 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:56.223 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:56.223 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:56.223 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:56.223 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:56.223 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:56.223 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:56.223 
'./librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:56.223 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:56.223 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:56.223 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:56.223 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:56.223 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:56.223 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:56.223 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:56.223 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:56.223 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:56.223 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:56.223 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:56.223 16:09:46 build_native_dpdk -- common/autobuild_common.sh@213 -- $ cat 00:03:56.223 16:09:46 build_native_dpdk -- common/autobuild_common.sh@218 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:56.223 00:03:56.223 real 1m23.872s 00:03:56.223 user 14m25.694s 00:03:56.223 sys 1m53.839s 00:03:56.223 16:09:46 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:56.223 16:09:46 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:03:56.223 ************************************ 00:03:56.223 END TEST build_native_dpdk 00:03:56.223 ************************************ 00:03:56.223 16:09:46 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:56.223 16:09:46 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:56.223 16:09:46 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
00:03:56.223 16:09:46 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:56.223 16:09:46 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:56.223 16:09:46 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:56.223 16:09:46 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:56.223 16:09:46 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:03:56.223 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:03:56.482 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:56.482 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:56.482 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:56.740 Using 'verbs' RDMA provider 00:04:07.668 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:17.665 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:17.665 Creating mk/config.mk...done. 00:04:17.665 Creating mk/cc.flags.mk...done. 00:04:17.665 Type 'make' to build. 00:04:17.665 16:10:07 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:04:17.665 16:10:07 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:17.665 16:10:07 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:17.665 16:10:07 -- common/autotest_common.sh@10 -- $ set +x 00:04:17.665 ************************************ 00:04:17.665 START TEST make 00:04:17.665 ************************************ 00:04:17.666 16:10:07 make -- common/autotest_common.sh@1129 -- $ make -j48 00:04:17.666 make[1]: Nothing to be done for 'all'. 
00:04:19.061 The Meson build system 00:04:19.061 Version: 1.5.0 00:04:19.061 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:19.061 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:19.061 Build type: native build 00:04:19.061 Project name: libvfio-user 00:04:19.061 Project version: 0.0.1 00:04:19.061 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:19.061 C linker for the host machine: gcc ld.bfd 2.40-14 00:04:19.061 Host machine cpu family: x86_64 00:04:19.061 Host machine cpu: x86_64 00:04:19.061 Run-time dependency threads found: YES 00:04:19.061 Library dl found: YES 00:04:19.061 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:19.061 Run-time dependency json-c found: YES 0.17 00:04:19.061 Run-time dependency cmocka found: YES 1.1.7 00:04:19.061 Program pytest-3 found: NO 00:04:19.061 Program flake8 found: NO 00:04:19.061 Program misspell-fixer found: NO 00:04:19.061 Program restructuredtext-lint found: NO 00:04:19.061 Program valgrind found: YES (/usr/bin/valgrind) 00:04:19.061 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:19.061 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:19.061 Compiler for C supports arguments -Wwrite-strings: YES 00:04:19.061 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:19.061 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:19.061 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:19.061 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:19.061 Build targets in project: 8 00:04:19.061 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:19.061 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:19.061 00:04:19.061 libvfio-user 0.0.1 00:04:19.061 00:04:19.061 User defined options 00:04:19.061 buildtype : debug 00:04:19.061 default_library: shared 00:04:19.061 libdir : /usr/local/lib 00:04:19.061 00:04:19.061 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:20.020 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:20.286 [1/37] Compiling C object samples/null.p/null.c.o 00:04:20.286 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:20.286 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:20.286 [4/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:20.286 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:20.286 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:20.286 [7/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:20.286 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:20.286 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:20.286 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:20.286 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:20.286 [12/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:20.286 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:20.286 [14/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:20.286 [15/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:20.286 [16/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:20.286 [17/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:20.551 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:20.551 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:04:20.551 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:20.551 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:20.551 [22/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:20.551 [23/37] Compiling C object samples/server.p/server.c.o 00:04:20.551 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:20.551 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:20.551 [26/37] Compiling C object samples/client.p/client.c.o 00:04:20.551 [27/37] Linking target samples/client 00:04:20.551 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:20.551 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:04:20.821 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:20.821 [31/37] Linking target test/unit_tests 00:04:20.821 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:20.821 [33/37] Linking target samples/server 00:04:20.821 [34/37] Linking target samples/shadow_ioeventfd_server 00:04:20.821 [35/37] Linking target samples/gpio-pci-idio-16 00:04:21.087 [36/37] Linking target samples/null 00:04:21.087 [37/37] Linking target samples/lspci 00:04:21.087 INFO: autodetecting backend as ninja 00:04:21.087 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:21.087 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:22.030 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:22.030 ninja: no work to do. 
00:05:00.749 CC lib/ut_mock/mock.o 00:05:00.749 CC lib/log/log.o 00:05:00.749 CC lib/log/log_flags.o 00:05:00.749 CC lib/ut/ut.o 00:05:00.749 CC lib/log/log_deprecated.o 00:05:00.749 LIB libspdk_ut.a 00:05:00.749 LIB libspdk_ut_mock.a 00:05:00.749 LIB libspdk_log.a 00:05:00.749 SO libspdk_ut.so.2.0 00:05:00.749 SO libspdk_ut_mock.so.6.0 00:05:00.749 SO libspdk_log.so.7.1 00:05:00.749 SYMLINK libspdk_ut_mock.so 00:05:00.749 SYMLINK libspdk_ut.so 00:05:00.749 SYMLINK libspdk_log.so 00:05:00.749 CC lib/ioat/ioat.o 00:05:00.749 CC lib/dma/dma.o 00:05:00.749 CXX lib/trace_parser/trace.o 00:05:00.749 CC lib/util/base64.o 00:05:00.749 CC lib/util/bit_array.o 00:05:00.749 CC lib/util/cpuset.o 00:05:00.749 CC lib/util/crc16.o 00:05:00.749 CC lib/util/crc32.o 00:05:00.749 CC lib/util/crc32c.o 00:05:00.749 CC lib/util/crc32_ieee.o 00:05:00.749 CC lib/util/crc64.o 00:05:00.749 CC lib/util/dif.o 00:05:00.749 CC lib/util/fd.o 00:05:00.749 CC lib/util/fd_group.o 00:05:00.749 CC lib/util/file.o 00:05:00.749 CC lib/util/hexlify.o 00:05:00.749 CC lib/util/iov.o 00:05:00.749 CC lib/util/math.o 00:05:00.749 CC lib/util/net.o 00:05:00.749 CC lib/util/pipe.o 00:05:00.749 CC lib/util/strerror_tls.o 00:05:00.749 CC lib/util/string.o 00:05:00.749 CC lib/util/uuid.o 00:05:00.749 CC lib/util/xor.o 00:05:00.749 CC lib/util/zipf.o 00:05:00.749 CC lib/util/md5.o 00:05:00.749 CC lib/vfio_user/host/vfio_user_pci.o 00:05:00.749 CC lib/vfio_user/host/vfio_user.o 00:05:00.749 LIB libspdk_dma.a 00:05:00.749 SO libspdk_dma.so.5.0 00:05:00.749 SYMLINK libspdk_dma.so 00:05:00.749 LIB libspdk_ioat.a 00:05:00.749 SO libspdk_ioat.so.7.0 00:05:00.749 LIB libspdk_vfio_user.a 00:05:00.749 SYMLINK libspdk_ioat.so 00:05:00.749 SO libspdk_vfio_user.so.5.0 00:05:00.749 SYMLINK libspdk_vfio_user.so 00:05:00.749 LIB libspdk_util.a 00:05:00.749 SO libspdk_util.so.10.1 00:05:00.749 SYMLINK libspdk_util.so 00:05:00.749 CC lib/conf/conf.o 00:05:00.749 CC lib/idxd/idxd.o 00:05:00.749 CC lib/idxd/idxd_user.o 00:05:00.749 
CC lib/json/json_parse.o 00:05:00.749 CC lib/idxd/idxd_kernel.o 00:05:00.749 CC lib/json/json_util.o 00:05:00.749 CC lib/vmd/vmd.o 00:05:00.749 CC lib/json/json_write.o 00:05:00.749 CC lib/vmd/led.o 00:05:00.749 CC lib/rdma_utils/rdma_utils.o 00:05:00.749 CC lib/env_dpdk/env.o 00:05:00.749 CC lib/env_dpdk/memory.o 00:05:00.749 CC lib/env_dpdk/pci.o 00:05:00.749 CC lib/env_dpdk/init.o 00:05:00.749 CC lib/env_dpdk/threads.o 00:05:00.749 CC lib/env_dpdk/pci_ioat.o 00:05:00.749 CC lib/env_dpdk/pci_virtio.o 00:05:00.749 CC lib/env_dpdk/pci_vmd.o 00:05:00.749 CC lib/env_dpdk/pci_idxd.o 00:05:00.749 CC lib/env_dpdk/pci_event.o 00:05:00.749 CC lib/env_dpdk/sigbus_handler.o 00:05:00.749 CC lib/env_dpdk/pci_dpdk.o 00:05:00.749 CC lib/env_dpdk/pci_dpdk_2207.o 00:05:00.749 CC lib/env_dpdk/pci_dpdk_2211.o 00:05:00.749 LIB libspdk_conf.a 00:05:00.749 SO libspdk_conf.so.6.0 00:05:00.749 LIB libspdk_rdma_utils.a 00:05:00.749 LIB libspdk_json.a 00:05:00.749 SYMLINK libspdk_conf.so 00:05:00.749 SO libspdk_rdma_utils.so.1.0 00:05:00.749 SO libspdk_json.so.6.0 00:05:00.749 SYMLINK libspdk_rdma_utils.so 00:05:00.749 SYMLINK libspdk_json.so 00:05:00.749 CC lib/rdma_provider/common.o 00:05:00.749 CC lib/rdma_provider/rdma_provider_verbs.o 00:05:00.749 CC lib/jsonrpc/jsonrpc_server.o 00:05:00.749 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:05:00.749 CC lib/jsonrpc/jsonrpc_client.o 00:05:00.749 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:05:00.749 LIB libspdk_idxd.a 00:05:00.749 SO libspdk_idxd.so.12.1 00:05:00.749 LIB libspdk_vmd.a 00:05:00.749 SYMLINK libspdk_idxd.so 00:05:00.749 SO libspdk_vmd.so.6.0 00:05:00.749 SYMLINK libspdk_vmd.so 00:05:00.749 LIB libspdk_rdma_provider.a 00:05:00.749 SO libspdk_rdma_provider.so.7.0 00:05:00.749 LIB libspdk_jsonrpc.a 00:05:00.749 SO libspdk_jsonrpc.so.6.0 00:05:00.749 SYMLINK libspdk_rdma_provider.so 00:05:00.749 SYMLINK libspdk_jsonrpc.so 00:05:00.749 LIB libspdk_trace_parser.a 00:05:00.749 SO libspdk_trace_parser.so.6.0 00:05:00.749 CC lib/rpc/rpc.o 
00:05:00.749 SYMLINK libspdk_trace_parser.so 00:05:00.749 LIB libspdk_rpc.a 00:05:00.749 SO libspdk_rpc.so.6.0 00:05:00.749 SYMLINK libspdk_rpc.so 00:05:00.749 CC lib/keyring/keyring.o 00:05:00.749 CC lib/notify/notify.o 00:05:00.749 CC lib/keyring/keyring_rpc.o 00:05:00.749 CC lib/trace/trace.o 00:05:00.749 CC lib/notify/notify_rpc.o 00:05:00.749 CC lib/trace/trace_flags.o 00:05:00.749 CC lib/trace/trace_rpc.o 00:05:00.749 LIB libspdk_notify.a 00:05:00.749 SO libspdk_notify.so.6.0 00:05:00.749 SYMLINK libspdk_notify.so 00:05:00.749 LIB libspdk_keyring.a 00:05:00.749 LIB libspdk_trace.a 00:05:00.749 SO libspdk_keyring.so.2.0 00:05:00.749 SO libspdk_trace.so.11.0 00:05:00.749 SYMLINK libspdk_keyring.so 00:05:00.749 SYMLINK libspdk_trace.so 00:05:00.749 CC lib/thread/thread.o 00:05:00.749 CC lib/sock/sock.o 00:05:00.749 CC lib/thread/iobuf.o 00:05:00.749 CC lib/sock/sock_rpc.o 00:05:01.008 LIB libspdk_env_dpdk.a 00:05:01.008 SO libspdk_env_dpdk.so.15.1 00:05:01.008 SYMLINK libspdk_env_dpdk.so 00:05:01.267 LIB libspdk_sock.a 00:05:01.267 SO libspdk_sock.so.10.0 00:05:01.267 SYMLINK libspdk_sock.so 00:05:01.528 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:01.528 CC lib/nvme/nvme_ctrlr.o 00:05:01.528 CC lib/nvme/nvme_fabric.o 00:05:01.528 CC lib/nvme/nvme_ns_cmd.o 00:05:01.528 CC lib/nvme/nvme_ns.o 00:05:01.528 CC lib/nvme/nvme_pcie_common.o 00:05:01.528 CC lib/nvme/nvme_pcie.o 00:05:01.528 CC lib/nvme/nvme_qpair.o 00:05:01.528 CC lib/nvme/nvme.o 00:05:01.528 CC lib/nvme/nvme_quirks.o 00:05:01.528 CC lib/nvme/nvme_transport.o 00:05:01.528 CC lib/nvme/nvme_discovery.o 00:05:01.528 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:01.528 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:01.528 CC lib/nvme/nvme_tcp.o 00:05:01.528 CC lib/nvme/nvme_opal.o 00:05:01.528 CC lib/nvme/nvme_io_msg.o 00:05:01.528 CC lib/nvme/nvme_poll_group.o 00:05:01.528 CC lib/nvme/nvme_zns.o 00:05:01.528 CC lib/nvme/nvme_stubs.o 00:05:01.528 CC lib/nvme/nvme_auth.o 00:05:01.528 CC lib/nvme/nvme_cuse.o 00:05:01.528 CC 
lib/nvme/nvme_vfio_user.o 00:05:01.528 CC lib/nvme/nvme_rdma.o 00:05:02.468 LIB libspdk_thread.a 00:05:02.468 SO libspdk_thread.so.11.0 00:05:02.468 SYMLINK libspdk_thread.so 00:05:02.728 CC lib/accel/accel.o 00:05:02.728 CC lib/accel/accel_rpc.o 00:05:02.728 CC lib/accel/accel_sw.o 00:05:02.728 CC lib/fsdev/fsdev.o 00:05:02.728 CC lib/fsdev/fsdev_io.o 00:05:02.728 CC lib/fsdev/fsdev_rpc.o 00:05:02.728 CC lib/vfu_tgt/tgt_endpoint.o 00:05:02.728 CC lib/init/json_config.o 00:05:02.728 CC lib/blob/request.o 00:05:02.728 CC lib/blob/blobstore.o 00:05:02.728 CC lib/vfu_tgt/tgt_rpc.o 00:05:02.728 CC lib/virtio/virtio.o 00:05:02.728 CC lib/init/subsystem.o 00:05:02.728 CC lib/blob/zeroes.o 00:05:02.728 CC lib/virtio/virtio_vhost_user.o 00:05:02.728 CC lib/blob/blob_bs_dev.o 00:05:02.728 CC lib/init/subsystem_rpc.o 00:05:02.728 CC lib/virtio/virtio_vfio_user.o 00:05:02.728 CC lib/init/rpc.o 00:05:02.728 CC lib/virtio/virtio_pci.o 00:05:02.987 LIB libspdk_init.a 00:05:02.987 SO libspdk_init.so.6.0 00:05:02.987 LIB libspdk_virtio.a 00:05:02.987 LIB libspdk_vfu_tgt.a 00:05:02.987 SYMLINK libspdk_init.so 00:05:03.247 SO libspdk_vfu_tgt.so.3.0 00:05:03.247 SO libspdk_virtio.so.7.0 00:05:03.247 SYMLINK libspdk_vfu_tgt.so 00:05:03.247 SYMLINK libspdk_virtio.so 00:05:03.247 CC lib/event/app.o 00:05:03.247 CC lib/event/reactor.o 00:05:03.247 CC lib/event/log_rpc.o 00:05:03.247 CC lib/event/app_rpc.o 00:05:03.247 CC lib/event/scheduler_static.o 00:05:03.505 LIB libspdk_fsdev.a 00:05:03.505 SO libspdk_fsdev.so.2.0 00:05:03.505 SYMLINK libspdk_fsdev.so 00:05:03.764 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:03.764 LIB libspdk_event.a 00:05:03.764 SO libspdk_event.so.14.0 00:05:03.764 SYMLINK libspdk_event.so 00:05:04.023 LIB libspdk_accel.a 00:05:04.023 SO libspdk_accel.so.16.0 00:05:04.023 LIB libspdk_nvme.a 00:05:04.023 SYMLINK libspdk_accel.so 00:05:04.023 SO libspdk_nvme.so.15.0 00:05:04.282 CC lib/bdev/bdev.o 00:05:04.282 CC lib/bdev/bdev_rpc.o 00:05:04.282 CC 
lib/bdev/bdev_zone.o 00:05:04.282 CC lib/bdev/part.o 00:05:04.282 CC lib/bdev/scsi_nvme.o 00:05:04.282 LIB libspdk_fuse_dispatcher.a 00:05:04.282 SYMLINK libspdk_nvme.so 00:05:04.282 SO libspdk_fuse_dispatcher.so.1.0 00:05:04.282 SYMLINK libspdk_fuse_dispatcher.so 00:05:06.189 LIB libspdk_blob.a 00:05:06.189 SO libspdk_blob.so.11.0 00:05:06.189 SYMLINK libspdk_blob.so 00:05:06.189 CC lib/lvol/lvol.o 00:05:06.189 CC lib/blobfs/blobfs.o 00:05:06.189 CC lib/blobfs/tree.o 00:05:06.757 LIB libspdk_bdev.a 00:05:06.757 LIB libspdk_blobfs.a 00:05:06.757 SO libspdk_blobfs.so.10.0 00:05:07.018 SO libspdk_bdev.so.17.0 00:05:07.018 SYMLINK libspdk_blobfs.so 00:05:07.018 SYMLINK libspdk_bdev.so 00:05:07.018 LIB libspdk_lvol.a 00:05:07.018 SO libspdk_lvol.so.10.0 00:05:07.018 SYMLINK libspdk_lvol.so 00:05:07.018 CC lib/nbd/nbd.o 00:05:07.018 CC lib/nbd/nbd_rpc.o 00:05:07.018 CC lib/ublk/ublk.o 00:05:07.018 CC lib/ublk/ublk_rpc.o 00:05:07.018 CC lib/scsi/dev.o 00:05:07.018 CC lib/scsi/lun.o 00:05:07.018 CC lib/nvmf/ctrlr.o 00:05:07.018 CC lib/nvmf/ctrlr_discovery.o 00:05:07.018 CC lib/ftl/ftl_core.o 00:05:07.018 CC lib/scsi/port.o 00:05:07.018 CC lib/scsi/scsi.o 00:05:07.018 CC lib/nvmf/ctrlr_bdev.o 00:05:07.018 CC lib/ftl/ftl_init.o 00:05:07.018 CC lib/scsi/scsi_bdev.o 00:05:07.018 CC lib/scsi/scsi_pr.o 00:05:07.018 CC lib/ftl/ftl_layout.o 00:05:07.018 CC lib/ftl/ftl_debug.o 00:05:07.018 CC lib/scsi/scsi_rpc.o 00:05:07.018 CC lib/nvmf/subsystem.o 00:05:07.018 CC lib/nvmf/nvmf.o 00:05:07.018 CC lib/scsi/task.o 00:05:07.018 CC lib/nvmf/nvmf_rpc.o 00:05:07.018 CC lib/ftl/ftl_io.o 00:05:07.018 CC lib/ftl/ftl_sb.o 00:05:07.018 CC lib/ftl/ftl_l2p.o 00:05:07.018 CC lib/ftl/ftl_l2p_flat.o 00:05:07.018 CC lib/nvmf/transport.o 00:05:07.018 CC lib/nvmf/tcp.o 00:05:07.018 CC lib/ftl/ftl_nv_cache.o 00:05:07.018 CC lib/nvmf/stubs.o 00:05:07.018 CC lib/nvmf/mdns_server.o 00:05:07.018 CC lib/ftl/ftl_band.o 00:05:07.284 CC lib/nvmf/vfio_user.o 00:05:07.284 CC lib/ftl/ftl_band_ops.o 00:05:07.284 
CC lib/ftl/ftl_writer.o 00:05:07.284 CC lib/nvmf/rdma.o 00:05:07.284 CC lib/ftl/ftl_rq.o 00:05:07.284 CC lib/nvmf/auth.o 00:05:07.284 CC lib/ftl/ftl_reloc.o 00:05:07.284 CC lib/ftl/ftl_l2p_cache.o 00:05:07.284 CC lib/ftl/ftl_p2l.o 00:05:07.284 CC lib/ftl/ftl_p2l_log.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:07.284 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:07.547 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:07.547 CC lib/ftl/utils/ftl_conf.o 00:05:07.547 CC lib/ftl/utils/ftl_md.o 00:05:07.547 CC lib/ftl/utils/ftl_mempool.o 00:05:07.547 CC lib/ftl/utils/ftl_bitmap.o 00:05:07.547 CC lib/ftl/utils/ftl_property.o 00:05:07.547 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:07.547 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:07.821 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:07.821 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:07.821 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:07.821 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:07.821 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:07.821 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:07.821 CC lib/ftl/base/ftl_base_dev.o 00:05:07.821 CC lib/ftl/base/ftl_base_bdev.o 00:05:07.821 CC lib/ftl/ftl_trace.o 00:05:08.080 LIB libspdk_nbd.a 00:05:08.080 SO libspdk_nbd.so.7.0 00:05:08.080 SYMLINK libspdk_nbd.so 00:05:08.080 LIB libspdk_scsi.a 00:05:08.080 SO 
libspdk_scsi.so.9.0 00:05:08.341 LIB libspdk_ublk.a 00:05:08.341 SO libspdk_ublk.so.3.0 00:05:08.341 SYMLINK libspdk_scsi.so 00:05:08.341 SYMLINK libspdk_ublk.so 00:05:08.341 CC lib/iscsi/conn.o 00:05:08.341 CC lib/vhost/vhost.o 00:05:08.341 CC lib/iscsi/init_grp.o 00:05:08.341 CC lib/iscsi/iscsi.o 00:05:08.341 CC lib/vhost/vhost_rpc.o 00:05:08.341 CC lib/iscsi/param.o 00:05:08.341 CC lib/vhost/vhost_scsi.o 00:05:08.341 CC lib/vhost/vhost_blk.o 00:05:08.341 CC lib/iscsi/portal_grp.o 00:05:08.341 CC lib/vhost/rte_vhost_user.o 00:05:08.341 CC lib/iscsi/tgt_node.o 00:05:08.341 CC lib/iscsi/iscsi_subsystem.o 00:05:08.341 CC lib/iscsi/iscsi_rpc.o 00:05:08.341 CC lib/iscsi/task.o 00:05:08.599 LIB libspdk_ftl.a 00:05:08.858 SO libspdk_ftl.so.9.0 00:05:09.117 SYMLINK libspdk_ftl.so 00:05:09.685 LIB libspdk_vhost.a 00:05:09.685 SO libspdk_vhost.so.8.0 00:05:09.685 SYMLINK libspdk_vhost.so 00:05:09.944 LIB libspdk_nvmf.a 00:05:09.944 LIB libspdk_iscsi.a 00:05:09.944 SO libspdk_nvmf.so.20.0 00:05:09.944 SO libspdk_iscsi.so.8.0 00:05:10.203 SYMLINK libspdk_iscsi.so 00:05:10.203 SYMLINK libspdk_nvmf.so 00:05:10.463 CC module/env_dpdk/env_dpdk_rpc.o 00:05:10.463 CC module/vfu_device/vfu_virtio.o 00:05:10.463 CC module/vfu_device/vfu_virtio_blk.o 00:05:10.463 CC module/vfu_device/vfu_virtio_scsi.o 00:05:10.463 CC module/vfu_device/vfu_virtio_fs.o 00:05:10.463 CC module/vfu_device/vfu_virtio_rpc.o 00:05:10.463 CC module/keyring/linux/keyring.o 00:05:10.463 CC module/blob/bdev/blob_bdev.o 00:05:10.463 CC module/accel/ioat/accel_ioat.o 00:05:10.463 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:05:10.463 CC module/keyring/linux/keyring_rpc.o 00:05:10.463 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:10.463 CC module/accel/ioat/accel_ioat_rpc.o 00:05:10.463 CC module/accel/error/accel_error.o 00:05:10.463 CC module/accel/error/accel_error_rpc.o 00:05:10.463 CC module/scheduler/gscheduler/gscheduler.o 00:05:10.463 CC module/accel/dsa/accel_dsa.o 00:05:10.463 CC 
module/sock/posix/posix.o 00:05:10.463 CC module/accel/dsa/accel_dsa_rpc.o 00:05:10.463 CC module/fsdev/aio/fsdev_aio.o 00:05:10.463 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:10.463 CC module/fsdev/aio/linux_aio_mgr.o 00:05:10.463 CC module/accel/iaa/accel_iaa.o 00:05:10.463 CC module/accel/iaa/accel_iaa_rpc.o 00:05:10.463 CC module/keyring/file/keyring_rpc.o 00:05:10.463 CC module/keyring/file/keyring.o 00:05:10.463 LIB libspdk_env_dpdk_rpc.a 00:05:10.722 SO libspdk_env_dpdk_rpc.so.6.0 00:05:10.722 SYMLINK libspdk_env_dpdk_rpc.so 00:05:10.722 LIB libspdk_keyring_linux.a 00:05:10.722 LIB libspdk_scheduler_gscheduler.a 00:05:10.722 LIB libspdk_scheduler_dpdk_governor.a 00:05:10.722 SO libspdk_keyring_linux.so.1.0 00:05:10.722 SO libspdk_scheduler_gscheduler.so.4.0 00:05:10.722 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:10.722 LIB libspdk_accel_ioat.a 00:05:10.722 LIB libspdk_scheduler_dynamic.a 00:05:10.722 LIB libspdk_accel_error.a 00:05:10.722 SO libspdk_accel_ioat.so.6.0 00:05:10.722 SYMLINK libspdk_keyring_linux.so 00:05:10.722 SYMLINK libspdk_scheduler_gscheduler.so 00:05:10.722 SO libspdk_scheduler_dynamic.so.4.0 00:05:10.722 LIB libspdk_keyring_file.a 00:05:10.722 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:10.722 SO libspdk_accel_error.so.2.0 00:05:10.722 SO libspdk_keyring_file.so.2.0 00:05:10.722 SYMLINK libspdk_accel_ioat.so 00:05:10.722 SYMLINK libspdk_scheduler_dynamic.so 00:05:10.722 LIB libspdk_blob_bdev.a 00:05:10.722 LIB libspdk_accel_dsa.a 00:05:10.722 SYMLINK libspdk_accel_error.so 00:05:10.722 SYMLINK libspdk_keyring_file.so 00:05:10.722 LIB libspdk_accel_iaa.a 00:05:10.722 SO libspdk_blob_bdev.so.11.0 00:05:10.981 SO libspdk_accel_dsa.so.5.0 00:05:10.981 SO libspdk_accel_iaa.so.3.0 00:05:10.981 SYMLINK libspdk_blob_bdev.so 00:05:10.981 SYMLINK libspdk_accel_dsa.so 00:05:10.981 SYMLINK libspdk_accel_iaa.so 00:05:10.981 LIB libspdk_vfu_device.a 00:05:11.246 SO libspdk_vfu_device.so.3.0 00:05:11.246 CC module/blobfs/bdev/blobfs_bdev.o 
00:05:11.246 CC module/bdev/null/bdev_null.o 00:05:11.246 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:11.246 CC module/bdev/null/bdev_null_rpc.o 00:05:11.246 CC module/bdev/iscsi/bdev_iscsi.o 00:05:11.246 CC module/bdev/aio/bdev_aio.o 00:05:11.246 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:11.246 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:11.246 CC module/bdev/aio/bdev_aio_rpc.o 00:05:11.246 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:11.246 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:11.246 CC module/bdev/gpt/gpt.o 00:05:11.246 CC module/bdev/ftl/bdev_ftl.o 00:05:11.246 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:11.246 CC module/bdev/gpt/vbdev_gpt.o 00:05:11.246 CC module/bdev/malloc/bdev_malloc.o 00:05:11.246 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:11.246 CC module/bdev/raid/bdev_raid.o 00:05:11.246 CC module/bdev/split/vbdev_split.o 00:05:11.246 CC module/bdev/raid/bdev_raid_rpc.o 00:05:11.246 CC module/bdev/lvol/vbdev_lvol.o 00:05:11.246 CC module/bdev/passthru/vbdev_passthru.o 00:05:11.246 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:11.246 CC module/bdev/nvme/bdev_nvme.o 00:05:11.246 CC module/bdev/split/vbdev_split_rpc.o 00:05:11.246 CC module/bdev/error/vbdev_error.o 00:05:11.246 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:05:11.246 CC module/bdev/raid/bdev_raid_sb.o 00:05:11.246 CC module/bdev/error/vbdev_error_rpc.o 00:05:11.246 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:11.246 CC module/bdev/raid/raid0.o 00:05:11.246 CC module/bdev/nvme/nvme_rpc.o 00:05:11.246 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:11.246 CC module/bdev/delay/vbdev_delay.o 00:05:11.246 CC module/bdev/raid/raid1.o 00:05:11.246 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:11.246 CC module/bdev/nvme/bdev_mdns_client.o 00:05:11.246 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:11.246 CC module/bdev/nvme/vbdev_opal.o 00:05:11.246 CC module/bdev/raid/concat.o 00:05:11.246 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:11.246 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:11.246 SYMLINK libspdk_vfu_device.so 00:05:11.246 LIB libspdk_fsdev_aio.a 00:05:11.507 SO libspdk_fsdev_aio.so.1.0 00:05:11.507 LIB libspdk_sock_posix.a 00:05:11.507 SO libspdk_sock_posix.so.6.0 00:05:11.507 SYMLINK libspdk_fsdev_aio.so 00:05:11.507 LIB libspdk_blobfs_bdev.a 00:05:11.507 LIB libspdk_bdev_error.a 00:05:11.507 SO libspdk_blobfs_bdev.so.6.0 00:05:11.507 SYMLINK libspdk_sock_posix.so 00:05:11.507 LIB libspdk_bdev_split.a 00:05:11.507 SO libspdk_bdev_error.so.6.0 00:05:11.507 SO libspdk_bdev_split.so.6.0 00:05:11.507 LIB libspdk_bdev_null.a 00:05:11.766 SYMLINK libspdk_blobfs_bdev.so 00:05:11.766 LIB libspdk_bdev_gpt.a 00:05:11.766 SO libspdk_bdev_null.so.6.0 00:05:11.766 SYMLINK libspdk_bdev_error.so 00:05:11.766 SO libspdk_bdev_gpt.so.6.0 00:05:11.766 SYMLINK libspdk_bdev_split.so 00:05:11.766 LIB libspdk_bdev_ftl.a 00:05:11.766 LIB libspdk_bdev_passthru.a 00:05:11.766 SYMLINK libspdk_bdev_null.so 00:05:11.766 SO libspdk_bdev_ftl.so.6.0 00:05:11.766 SO libspdk_bdev_passthru.so.6.0 00:05:11.766 SYMLINK libspdk_bdev_gpt.so 00:05:11.766 LIB libspdk_bdev_aio.a 00:05:11.766 SO libspdk_bdev_aio.so.6.0 00:05:11.766 LIB libspdk_bdev_zone_block.a 00:05:11.767 SYMLINK libspdk_bdev_ftl.so 00:05:11.767 LIB libspdk_bdev_delay.a 00:05:11.767 LIB libspdk_bdev_malloc.a 00:05:11.767 SYMLINK libspdk_bdev_passthru.so 00:05:11.767 SO libspdk_bdev_zone_block.so.6.0 00:05:11.767 SO libspdk_bdev_malloc.so.6.0 00:05:11.767 SO libspdk_bdev_delay.so.6.0 00:05:11.767 SYMLINK libspdk_bdev_aio.so 00:05:11.767 SYMLINK libspdk_bdev_zone_block.so 00:05:11.767 LIB libspdk_bdev_iscsi.a 00:05:11.767 SYMLINK libspdk_bdev_delay.so 00:05:11.767 SYMLINK libspdk_bdev_malloc.so 00:05:11.767 SO libspdk_bdev_iscsi.so.6.0 00:05:12.028 SYMLINK libspdk_bdev_iscsi.so 00:05:12.028 LIB libspdk_bdev_virtio.a 00:05:12.028 LIB libspdk_bdev_lvol.a 00:05:12.028 SO libspdk_bdev_lvol.so.6.0 00:05:12.028 SO libspdk_bdev_virtio.so.6.0 00:05:12.028 SYMLINK 
libspdk_bdev_lvol.so 00:05:12.028 SYMLINK libspdk_bdev_virtio.so 00:05:12.598 LIB libspdk_bdev_raid.a 00:05:12.598 SO libspdk_bdev_raid.so.6.0 00:05:12.598 SYMLINK libspdk_bdev_raid.so 00:05:13.985 LIB libspdk_bdev_nvme.a 00:05:13.985 SO libspdk_bdev_nvme.so.7.1 00:05:14.245 SYMLINK libspdk_bdev_nvme.so 00:05:14.504 CC module/event/subsystems/iobuf/iobuf.o 00:05:14.504 CC module/event/subsystems/vmd/vmd.o 00:05:14.504 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:14.504 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:14.504 CC module/event/subsystems/keyring/keyring.o 00:05:14.504 CC module/event/subsystems/scheduler/scheduler.o 00:05:14.504 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:14.504 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:14.504 CC module/event/subsystems/fsdev/fsdev.o 00:05:14.504 CC module/event/subsystems/sock/sock.o 00:05:14.504 LIB libspdk_event_keyring.a 00:05:14.763 LIB libspdk_event_vhost_blk.a 00:05:14.763 LIB libspdk_event_fsdev.a 00:05:14.763 LIB libspdk_event_vmd.a 00:05:14.763 LIB libspdk_event_scheduler.a 00:05:14.763 LIB libspdk_event_vfu_tgt.a 00:05:14.763 LIB libspdk_event_sock.a 00:05:14.763 SO libspdk_event_keyring.so.1.0 00:05:14.763 LIB libspdk_event_iobuf.a 00:05:14.763 SO libspdk_event_fsdev.so.1.0 00:05:14.763 SO libspdk_event_scheduler.so.4.0 00:05:14.763 SO libspdk_event_vfu_tgt.so.3.0 00:05:14.763 SO libspdk_event_vhost_blk.so.3.0 00:05:14.763 SO libspdk_event_vmd.so.6.0 00:05:14.763 SO libspdk_event_sock.so.5.0 00:05:14.763 SO libspdk_event_iobuf.so.3.0 00:05:14.763 SYMLINK libspdk_event_keyring.so 00:05:14.763 SYMLINK libspdk_event_fsdev.so 00:05:14.763 SYMLINK libspdk_event_vhost_blk.so 00:05:14.763 SYMLINK libspdk_event_scheduler.so 00:05:14.763 SYMLINK libspdk_event_vfu_tgt.so 00:05:14.763 SYMLINK libspdk_event_sock.so 00:05:14.763 SYMLINK libspdk_event_vmd.so 00:05:14.763 SYMLINK libspdk_event_iobuf.so 00:05:15.023 CC module/event/subsystems/accel/accel.o 00:05:15.023 LIB libspdk_event_accel.a 
00:05:15.023 SO libspdk_event_accel.so.6.0 00:05:15.023 SYMLINK libspdk_event_accel.so 00:05:15.284 CC module/event/subsystems/bdev/bdev.o 00:05:15.543 LIB libspdk_event_bdev.a 00:05:15.543 SO libspdk_event_bdev.so.6.0 00:05:15.543 SYMLINK libspdk_event_bdev.so 00:05:15.803 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:15.803 CC module/event/subsystems/nbd/nbd.o 00:05:15.803 CC module/event/subsystems/scsi/scsi.o 00:05:15.803 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:15.803 CC module/event/subsystems/ublk/ublk.o 00:05:15.803 LIB libspdk_event_nbd.a 00:05:15.803 LIB libspdk_event_ublk.a 00:05:15.803 LIB libspdk_event_scsi.a 00:05:15.803 SO libspdk_event_nbd.so.6.0 00:05:15.803 SO libspdk_event_ublk.so.3.0 00:05:15.803 SO libspdk_event_scsi.so.6.0 00:05:16.062 SYMLINK libspdk_event_nbd.so 00:05:16.062 SYMLINK libspdk_event_ublk.so 00:05:16.062 SYMLINK libspdk_event_scsi.so 00:05:16.062 LIB libspdk_event_nvmf.a 00:05:16.062 SO libspdk_event_nvmf.so.6.0 00:05:16.062 SYMLINK libspdk_event_nvmf.so 00:05:16.062 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:16.062 CC module/event/subsystems/iscsi/iscsi.o 00:05:16.323 LIB libspdk_event_vhost_scsi.a 00:05:16.323 SO libspdk_event_vhost_scsi.so.3.0 00:05:16.323 LIB libspdk_event_iscsi.a 00:05:16.323 SO libspdk_event_iscsi.so.6.0 00:05:16.323 SYMLINK libspdk_event_vhost_scsi.so 00:05:16.323 SYMLINK libspdk_event_iscsi.so 00:05:16.583 SO libspdk.so.6.0 00:05:16.583 SYMLINK libspdk.so 00:05:16.583 CXX app/trace/trace.o 00:05:16.583 CC app/trace_record/trace_record.o 00:05:16.583 CC app/spdk_top/spdk_top.o 00:05:16.583 CC app/spdk_nvme_discover/discovery_aer.o 00:05:16.583 CC app/spdk_lspci/spdk_lspci.o 00:05:16.583 TEST_HEADER include/spdk/accel.h 00:05:16.583 CC app/spdk_nvme_identify/identify.o 00:05:16.583 CC test/rpc_client/rpc_client_test.o 00:05:16.583 TEST_HEADER include/spdk/accel_module.h 00:05:16.583 TEST_HEADER include/spdk/assert.h 00:05:16.583 TEST_HEADER include/spdk/barrier.h 00:05:16.583 
TEST_HEADER include/spdk/base64.h 00:05:16.583 CC app/spdk_nvme_perf/perf.o 00:05:16.583 TEST_HEADER include/spdk/bdev.h 00:05:16.583 TEST_HEADER include/spdk/bdev_module.h 00:05:16.583 TEST_HEADER include/spdk/bdev_zone.h 00:05:16.583 TEST_HEADER include/spdk/bit_array.h 00:05:16.583 TEST_HEADER include/spdk/bit_pool.h 00:05:16.583 TEST_HEADER include/spdk/blob_bdev.h 00:05:16.583 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:16.583 TEST_HEADER include/spdk/blobfs.h 00:05:16.583 TEST_HEADER include/spdk/blob.h 00:05:16.583 TEST_HEADER include/spdk/conf.h 00:05:16.583 TEST_HEADER include/spdk/config.h 00:05:16.583 TEST_HEADER include/spdk/cpuset.h 00:05:16.583 TEST_HEADER include/spdk/crc16.h 00:05:16.583 TEST_HEADER include/spdk/crc32.h 00:05:16.583 TEST_HEADER include/spdk/crc64.h 00:05:16.583 TEST_HEADER include/spdk/dma.h 00:05:16.583 TEST_HEADER include/spdk/dif.h 00:05:16.583 TEST_HEADER include/spdk/endian.h 00:05:16.583 TEST_HEADER include/spdk/env_dpdk.h 00:05:16.583 TEST_HEADER include/spdk/env.h 00:05:16.583 TEST_HEADER include/spdk/event.h 00:05:16.583 TEST_HEADER include/spdk/fd_group.h 00:05:16.848 TEST_HEADER include/spdk/fd.h 00:05:16.848 TEST_HEADER include/spdk/file.h 00:05:16.848 TEST_HEADER include/spdk/fsdev.h 00:05:16.848 TEST_HEADER include/spdk/fsdev_module.h 00:05:16.848 TEST_HEADER include/spdk/ftl.h 00:05:16.848 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:16.848 TEST_HEADER include/spdk/gpt_spec.h 00:05:16.848 TEST_HEADER include/spdk/histogram_data.h 00:05:16.848 TEST_HEADER include/spdk/hexlify.h 00:05:16.848 TEST_HEADER include/spdk/idxd.h 00:05:16.848 TEST_HEADER include/spdk/idxd_spec.h 00:05:16.848 TEST_HEADER include/spdk/init.h 00:05:16.848 TEST_HEADER include/spdk/ioat.h 00:05:16.848 TEST_HEADER include/spdk/ioat_spec.h 00:05:16.848 TEST_HEADER include/spdk/iscsi_spec.h 00:05:16.848 TEST_HEADER include/spdk/jsonrpc.h 00:05:16.848 TEST_HEADER include/spdk/json.h 00:05:16.848 TEST_HEADER include/spdk/keyring.h 00:05:16.848 
TEST_HEADER include/spdk/keyring_module.h 00:05:16.848 TEST_HEADER include/spdk/likely.h 00:05:16.848 TEST_HEADER include/spdk/log.h 00:05:16.848 TEST_HEADER include/spdk/lvol.h 00:05:16.848 TEST_HEADER include/spdk/md5.h 00:05:16.848 TEST_HEADER include/spdk/memory.h 00:05:16.848 TEST_HEADER include/spdk/nbd.h 00:05:16.848 TEST_HEADER include/spdk/mmio.h 00:05:16.848 TEST_HEADER include/spdk/net.h 00:05:16.848 TEST_HEADER include/spdk/notify.h 00:05:16.848 TEST_HEADER include/spdk/nvme.h 00:05:16.848 TEST_HEADER include/spdk/nvme_intel.h 00:05:16.848 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:16.848 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:16.848 TEST_HEADER include/spdk/nvme_spec.h 00:05:16.848 TEST_HEADER include/spdk/nvme_zns.h 00:05:16.848 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:16.848 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:16.848 TEST_HEADER include/spdk/nvmf.h 00:05:16.848 TEST_HEADER include/spdk/nvmf_spec.h 00:05:16.848 TEST_HEADER include/spdk/nvmf_transport.h 00:05:16.848 TEST_HEADER include/spdk/opal.h 00:05:16.848 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:16.848 TEST_HEADER include/spdk/opal_spec.h 00:05:16.848 TEST_HEADER include/spdk/pci_ids.h 00:05:16.848 TEST_HEADER include/spdk/pipe.h 00:05:16.848 TEST_HEADER include/spdk/queue.h 00:05:16.848 TEST_HEADER include/spdk/reduce.h 00:05:16.848 TEST_HEADER include/spdk/rpc.h 00:05:16.848 TEST_HEADER include/spdk/scheduler.h 00:05:16.848 TEST_HEADER include/spdk/scsi.h 00:05:16.848 TEST_HEADER include/spdk/scsi_spec.h 00:05:16.848 TEST_HEADER include/spdk/sock.h 00:05:16.848 TEST_HEADER include/spdk/stdinc.h 00:05:16.848 TEST_HEADER include/spdk/thread.h 00:05:16.848 TEST_HEADER include/spdk/string.h 00:05:16.848 TEST_HEADER include/spdk/trace.h 00:05:16.848 TEST_HEADER include/spdk/trace_parser.h 00:05:16.848 TEST_HEADER include/spdk/tree.h 00:05:16.848 TEST_HEADER include/spdk/ublk.h 00:05:16.848 TEST_HEADER include/spdk/util.h 00:05:16.848 TEST_HEADER include/spdk/uuid.h 
00:05:16.848 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:16.848 TEST_HEADER include/spdk/version.h 00:05:16.848 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:16.848 TEST_HEADER include/spdk/vhost.h 00:05:16.848 TEST_HEADER include/spdk/vmd.h 00:05:16.848 TEST_HEADER include/spdk/xor.h 00:05:16.848 TEST_HEADER include/spdk/zipf.h 00:05:16.848 CXX test/cpp_headers/accel.o 00:05:16.848 CXX test/cpp_headers/accel_module.o 00:05:16.848 CXX test/cpp_headers/assert.o 00:05:16.848 CXX test/cpp_headers/base64.o 00:05:16.848 CXX test/cpp_headers/barrier.o 00:05:16.848 CXX test/cpp_headers/bdev.o 00:05:16.848 CXX test/cpp_headers/bdev_module.o 00:05:16.848 CXX test/cpp_headers/bit_array.o 00:05:16.848 CXX test/cpp_headers/bdev_zone.o 00:05:16.848 CXX test/cpp_headers/bit_pool.o 00:05:16.848 CXX test/cpp_headers/blob_bdev.o 00:05:16.848 CXX test/cpp_headers/blobfs_bdev.o 00:05:16.848 CXX test/cpp_headers/blobfs.o 00:05:16.848 CXX test/cpp_headers/blob.o 00:05:16.848 CC app/spdk_dd/spdk_dd.o 00:05:16.848 CXX test/cpp_headers/conf.o 00:05:16.848 CXX test/cpp_headers/config.o 00:05:16.848 CXX test/cpp_headers/cpuset.o 00:05:16.848 CXX test/cpp_headers/crc16.o 00:05:16.848 CC app/nvmf_tgt/nvmf_main.o 00:05:16.848 CC app/iscsi_tgt/iscsi_tgt.o 00:05:16.848 CXX test/cpp_headers/crc32.o 00:05:16.849 CC examples/util/zipf/zipf.o 00:05:16.849 CC test/env/vtophys/vtophys.o 00:05:16.849 CC test/app/stub/stub.o 00:05:16.849 CC test/app/jsoncat/jsoncat.o 00:05:16.849 CC app/spdk_tgt/spdk_tgt.o 00:05:16.849 CC test/env/memory/memory_ut.o 00:05:16.849 CC examples/ioat/verify/verify.o 00:05:16.849 CC examples/ioat/perf/perf.o 00:05:16.849 CC test/app/histogram_perf/histogram_perf.o 00:05:16.849 CC test/thread/poller_perf/poller_perf.o 00:05:16.849 CC test/env/pci/pci_ut.o 00:05:16.849 CC app/fio/nvme/fio_plugin.o 00:05:16.849 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:16.849 CC app/fio/bdev/fio_plugin.o 00:05:16.849 CC test/app/bdev_svc/bdev_svc.o 00:05:16.849 CC 
test/dma/test_dma/test_dma.o 00:05:17.115 CC test/env/mem_callbacks/mem_callbacks.o 00:05:17.115 LINK spdk_lspci 00:05:17.115 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:17.115 LINK rpc_client_test 00:05:17.115 LINK jsoncat 00:05:17.115 LINK interrupt_tgt 00:05:17.115 LINK zipf 00:05:17.115 LINK spdk_nvme_discover 00:05:17.115 LINK vtophys 00:05:17.115 CXX test/cpp_headers/crc64.o 00:05:17.115 LINK poller_perf 00:05:17.115 CXX test/cpp_headers/dif.o 00:05:17.115 LINK histogram_perf 00:05:17.115 CXX test/cpp_headers/dma.o 00:05:17.115 LINK env_dpdk_post_init 00:05:17.115 LINK nvmf_tgt 00:05:17.115 LINK spdk_trace_record 00:05:17.115 CXX test/cpp_headers/endian.o 00:05:17.115 CXX test/cpp_headers/env_dpdk.o 00:05:17.115 LINK stub 00:05:17.383 CXX test/cpp_headers/env.o 00:05:17.383 CXX test/cpp_headers/event.o 00:05:17.383 CXX test/cpp_headers/fd_group.o 00:05:17.383 CXX test/cpp_headers/fd.o 00:05:17.383 CXX test/cpp_headers/file.o 00:05:17.383 CXX test/cpp_headers/fsdev.o 00:05:17.383 CXX test/cpp_headers/fsdev_module.o 00:05:17.383 CXX test/cpp_headers/ftl.o 00:05:17.383 CXX test/cpp_headers/fuse_dispatcher.o 00:05:17.383 LINK iscsi_tgt 00:05:17.383 CXX test/cpp_headers/gpt_spec.o 00:05:17.383 CXX test/cpp_headers/hexlify.o 00:05:17.383 LINK verify 00:05:17.383 LINK ioat_perf 00:05:17.383 LINK bdev_svc 00:05:17.383 LINK spdk_tgt 00:05:17.383 CXX test/cpp_headers/histogram_data.o 00:05:17.383 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:17.383 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:17.384 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:17.384 LINK mem_callbacks 00:05:17.384 CXX test/cpp_headers/idxd.o 00:05:17.384 CXX test/cpp_headers/idxd_spec.o 00:05:17.384 CXX test/cpp_headers/init.o 00:05:17.384 CXX test/cpp_headers/ioat.o 00:05:17.384 CXX test/cpp_headers/ioat_spec.o 00:05:17.648 CXX test/cpp_headers/iscsi_spec.o 00:05:17.648 LINK spdk_dd 00:05:17.648 LINK spdk_trace 00:05:17.648 CXX test/cpp_headers/json.o 00:05:17.648 CXX 
test/cpp_headers/jsonrpc.o 00:05:17.648 CXX test/cpp_headers/keyring.o 00:05:17.648 CXX test/cpp_headers/keyring_module.o 00:05:17.648 CXX test/cpp_headers/likely.o 00:05:17.648 CXX test/cpp_headers/log.o 00:05:17.648 CXX test/cpp_headers/lvol.o 00:05:17.648 CXX test/cpp_headers/md5.o 00:05:17.648 CXX test/cpp_headers/memory.o 00:05:17.648 CXX test/cpp_headers/mmio.o 00:05:17.648 CXX test/cpp_headers/nbd.o 00:05:17.648 CXX test/cpp_headers/net.o 00:05:17.648 CXX test/cpp_headers/notify.o 00:05:17.648 LINK pci_ut 00:05:17.648 CXX test/cpp_headers/nvme.o 00:05:17.648 CXX test/cpp_headers/nvme_ocssd.o 00:05:17.648 CXX test/cpp_headers/nvme_intel.o 00:05:17.648 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:17.648 CXX test/cpp_headers/nvme_spec.o 00:05:17.913 CXX test/cpp_headers/nvme_zns.o 00:05:17.913 CXX test/cpp_headers/nvmf_cmd.o 00:05:17.913 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:17.913 CXX test/cpp_headers/nvmf.o 00:05:17.913 CC test/event/reactor/reactor.o 00:05:17.913 CC test/event/event_perf/event_perf.o 00:05:17.913 CC test/event/reactor_perf/reactor_perf.o 00:05:17.913 CXX test/cpp_headers/nvmf_spec.o 00:05:17.913 CXX test/cpp_headers/nvmf_transport.o 00:05:17.913 CC examples/sock/hello_world/hello_sock.o 00:05:17.913 CC examples/vmd/led/led.o 00:05:17.913 CC examples/vmd/lsvmd/lsvmd.o 00:05:17.913 CXX test/cpp_headers/opal.o 00:05:17.913 CXX test/cpp_headers/opal_spec.o 00:05:17.913 LINK nvme_fuzz 00:05:17.913 CXX test/cpp_headers/pci_ids.o 00:05:17.913 CC examples/thread/thread/thread_ex.o 00:05:17.913 CC test/event/app_repeat/app_repeat.o 00:05:17.913 CC examples/idxd/perf/perf.o 00:05:17.913 CC test/event/scheduler/scheduler.o 00:05:17.913 CXX test/cpp_headers/pipe.o 00:05:17.913 LINK test_dma 00:05:18.177 CXX test/cpp_headers/queue.o 00:05:18.178 CXX test/cpp_headers/reduce.o 00:05:18.178 CXX test/cpp_headers/rpc.o 00:05:18.178 CXX test/cpp_headers/scheduler.o 00:05:18.178 CXX test/cpp_headers/scsi.o 00:05:18.178 CXX test/cpp_headers/scsi_spec.o 
00:05:18.178 CXX test/cpp_headers/sock.o 00:05:18.178 CXX test/cpp_headers/string.o 00:05:18.178 CXX test/cpp_headers/stdinc.o 00:05:18.178 CXX test/cpp_headers/thread.o 00:05:18.178 CXX test/cpp_headers/trace.o 00:05:18.178 CXX test/cpp_headers/trace_parser.o 00:05:18.178 LINK spdk_bdev 00:05:18.178 CXX test/cpp_headers/tree.o 00:05:18.178 LINK reactor 00:05:18.178 LINK reactor_perf 00:05:18.178 CXX test/cpp_headers/ublk.o 00:05:18.178 LINK event_perf 00:05:18.178 CXX test/cpp_headers/util.o 00:05:18.178 LINK lsvmd 00:05:18.178 CC app/vhost/vhost.o 00:05:18.178 LINK spdk_nvme 00:05:18.178 LINK led 00:05:18.178 CXX test/cpp_headers/uuid.o 00:05:18.178 LINK vhost_fuzz 00:05:18.178 CXX test/cpp_headers/version.o 00:05:18.178 CXX test/cpp_headers/vfio_user_pci.o 00:05:18.178 CXX test/cpp_headers/vfio_user_spec.o 00:05:18.178 CXX test/cpp_headers/vhost.o 00:05:18.178 CXX test/cpp_headers/vmd.o 00:05:18.178 CXX test/cpp_headers/xor.o 00:05:18.178 LINK app_repeat 00:05:18.178 CXX test/cpp_headers/zipf.o 00:05:18.438 LINK spdk_nvme_perf 00:05:18.438 LINK spdk_nvme_identify 00:05:18.438 LINK hello_sock 00:05:18.438 LINK memory_ut 00:05:18.438 LINK spdk_top 00:05:18.438 LINK thread 00:05:18.438 LINK scheduler 00:05:18.698 LINK vhost 00:05:18.698 LINK idxd_perf 00:05:18.698 CC test/nvme/reserve/reserve.o 00:05:18.698 CC test/nvme/startup/startup.o 00:05:18.698 CC test/nvme/connect_stress/connect_stress.o 00:05:18.698 CC test/nvme/boot_partition/boot_partition.o 00:05:18.698 CC test/nvme/e2edp/nvme_dp.o 00:05:18.698 CC test/nvme/reset/reset.o 00:05:18.698 CC test/nvme/aer/aer.o 00:05:18.698 CC test/nvme/cuse/cuse.o 00:05:18.698 CC test/nvme/compliance/nvme_compliance.o 00:05:18.698 CC test/nvme/simple_copy/simple_copy.o 00:05:18.698 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:18.698 CC test/nvme/fdp/fdp.o 00:05:18.698 CC test/nvme/err_injection/err_injection.o 00:05:18.698 CC test/nvme/overhead/overhead.o 00:05:18.698 CC test/nvme/fused_ordering/fused_ordering.o 
00:05:18.698 CC test/nvme/sgl/sgl.o 00:05:18.698 CC test/blobfs/mkfs/mkfs.o 00:05:18.698 CC test/accel/dif/dif.o 00:05:18.698 CC test/lvol/esnap/esnap.o 00:05:18.958 CC examples/nvme/hello_world/hello_world.o 00:05:18.958 CC examples/nvme/hotplug/hotplug.o 00:05:18.958 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:18.958 CC examples/nvme/arbitration/arbitration.o 00:05:18.958 CC examples/nvme/reconnect/reconnect.o 00:05:18.958 CC examples/nvme/abort/abort.o 00:05:18.958 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:18.958 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:18.958 LINK boot_partition 00:05:18.958 LINK connect_stress 00:05:18.958 LINK startup 00:05:18.958 LINK doorbell_aers 00:05:18.958 LINK err_injection 00:05:18.958 LINK fused_ordering 00:05:18.958 CC examples/accel/perf/accel_perf.o 00:05:18.958 LINK simple_copy 00:05:18.958 LINK nvme_dp 00:05:18.958 LINK mkfs 00:05:18.958 LINK reset 00:05:18.958 CC examples/blob/cli/blobcli.o 00:05:18.958 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:18.958 CC examples/blob/hello_world/hello_blob.o 00:05:18.958 LINK reserve 00:05:18.958 LINK fdp 00:05:18.958 LINK aer 00:05:19.217 LINK overhead 00:05:19.217 LINK pmr_persistence 00:05:19.217 LINK nvme_compliance 00:05:19.217 LINK cmb_copy 00:05:19.217 LINK hotplug 00:05:19.217 LINK sgl 00:05:19.217 LINK hello_world 00:05:19.217 LINK arbitration 00:05:19.217 LINK hello_blob 00:05:19.217 LINK reconnect 00:05:19.476 LINK hello_fsdev 00:05:19.476 LINK abort 00:05:19.476 LINK dif 00:05:19.476 LINK accel_perf 00:05:19.476 LINK nvme_manage 00:05:19.735 LINK blobcli 00:05:19.735 LINK iscsi_fuzz 00:05:19.994 CC examples/bdev/hello_world/hello_bdev.o 00:05:19.994 CC test/bdev/bdevio/bdevio.o 00:05:19.994 CC examples/bdev/bdevperf/bdevperf.o 00:05:20.252 LINK hello_bdev 00:05:20.252 LINK bdevio 00:05:20.511 LINK cuse 00:05:20.769 LINK bdevperf 00:05:21.027 CC examples/nvmf/nvmf/nvmf.o 00:05:21.286 LINK nvmf 00:05:24.576 LINK esnap 00:05:24.576 00:05:24.576 real 
1m7.244s 00:05:24.576 user 9m0.035s 00:05:24.576 sys 1m58.388s 00:05:24.576 16:11:14 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:24.576 16:11:14 make -- common/autotest_common.sh@10 -- $ set +x 00:05:24.576 ************************************ 00:05:24.576 END TEST make 00:05:24.576 ************************************ 00:05:24.576 16:11:14 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:24.576 16:11:14 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:24.576 16:11:14 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:24.576 16:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.576 16:11:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:24.576 16:11:14 -- pm/common@44 -- $ pid=5654 00:05:24.576 16:11:14 -- pm/common@50 -- $ kill -TERM 5654 00:05:24.576 16:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.576 16:11:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:24.576 16:11:14 -- pm/common@44 -- $ pid=5656 00:05:24.576 16:11:14 -- pm/common@50 -- $ kill -TERM 5656 00:05:24.576 16:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.576 16:11:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:24.576 16:11:14 -- pm/common@44 -- $ pid=5658 00:05:24.576 16:11:14 -- pm/common@50 -- $ kill -TERM 5658 00:05:24.576 16:11:14 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.576 16:11:14 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:24.576 16:11:14 -- pm/common@44 -- $ pid=5689 00:05:24.576 16:11:14 -- pm/common@50 -- $ sudo -E kill -TERM 5689 00:05:24.576 16:11:14 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || 
SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:24.576 16:11:14 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:24.576 16:11:14 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:24.576 16:11:14 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:24.576 16:11:14 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:24.576 16:11:14 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:24.576 16:11:14 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.576 16:11:14 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.576 16:11:14 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.576 16:11:14 -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.576 16:11:14 -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.576 16:11:14 -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.576 16:11:14 -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.576 16:11:14 -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.576 16:11:14 -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.576 16:11:14 -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.576 16:11:14 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.576 16:11:14 -- scripts/common.sh@344 -- # case "$op" in 00:05:24.576 16:11:14 -- scripts/common.sh@345 -- # : 1 00:05:24.576 16:11:14 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.576 16:11:14 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.576 16:11:14 -- scripts/common.sh@365 -- # decimal 1 00:05:24.576 16:11:14 -- scripts/common.sh@353 -- # local d=1 00:05:24.576 16:11:14 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.576 16:11:14 -- scripts/common.sh@355 -- # echo 1 00:05:24.576 16:11:14 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.576 16:11:14 -- scripts/common.sh@366 -- # decimal 2 00:05:24.576 16:11:14 -- scripts/common.sh@353 -- # local d=2 00:05:24.576 16:11:14 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.576 16:11:14 -- scripts/common.sh@355 -- # echo 2 00:05:24.576 16:11:14 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.576 16:11:14 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.576 16:11:14 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.576 16:11:14 -- scripts/common.sh@368 -- # return 0 00:05:24.577 16:11:14 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.577 16:11:14 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:24.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.577 --rc genhtml_branch_coverage=1 00:05:24.577 --rc genhtml_function_coverage=1 00:05:24.577 --rc genhtml_legend=1 00:05:24.577 --rc geninfo_all_blocks=1 00:05:24.577 --rc geninfo_unexecuted_blocks=1 00:05:24.577 00:05:24.577 ' 00:05:24.577 16:11:14 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:24.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.577 --rc genhtml_branch_coverage=1 00:05:24.577 --rc genhtml_function_coverage=1 00:05:24.577 --rc genhtml_legend=1 00:05:24.577 --rc geninfo_all_blocks=1 00:05:24.577 --rc geninfo_unexecuted_blocks=1 00:05:24.577 00:05:24.577 ' 00:05:24.577 16:11:14 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:24.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.577 --rc genhtml_branch_coverage=1 00:05:24.577 --rc 
genhtml_function_coverage=1 00:05:24.577 --rc genhtml_legend=1 00:05:24.577 --rc geninfo_all_blocks=1 00:05:24.577 --rc geninfo_unexecuted_blocks=1 00:05:24.577 00:05:24.577 ' 00:05:24.577 16:11:14 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:24.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.577 --rc genhtml_branch_coverage=1 00:05:24.577 --rc genhtml_function_coverage=1 00:05:24.577 --rc genhtml_legend=1 00:05:24.577 --rc geninfo_all_blocks=1 00:05:24.577 --rc geninfo_unexecuted_blocks=1 00:05:24.577 00:05:24.577 ' 00:05:24.577 16:11:14 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.577 16:11:14 -- nvmf/common.sh@7 -- # uname -s 00:05:24.577 16:11:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.577 16:11:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.577 16:11:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.577 16:11:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.577 16:11:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.577 16:11:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.577 16:11:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.577 16:11:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.577 16:11:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.577 16:11:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.577 16:11:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.577 16:11:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:24.577 16:11:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.577 16:11:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.577 16:11:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:24.577 16:11:14 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.577 16:11:14 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.577 16:11:14 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.577 16:11:14 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.577 16:11:14 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.577 16:11:14 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.577 16:11:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.577 16:11:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.577 16:11:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.577 16:11:14 -- paths/export.sh@5 -- # export PATH 00:05:24.577 16:11:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.577 16:11:14 -- nvmf/common.sh@51 -- # : 0 00:05:24.577 16:11:14 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.577 16:11:14 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:05:24.577 16:11:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.577 16:11:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.577 16:11:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.577 16:11:14 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.577 16:11:14 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.577 16:11:14 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.577 16:11:14 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.577 16:11:14 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:24.577 16:11:14 -- spdk/autotest.sh@32 -- # uname -s 00:05:24.577 16:11:14 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:24.577 16:11:14 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:24.577 16:11:14 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:24.577 16:11:14 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:24.577 16:11:14 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:24.577 16:11:14 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:24.577 16:11:14 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:24.577 16:11:14 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:24.577 16:11:14 -- spdk/autotest.sh@48 -- # udevadm_pid=86498 00:05:24.577 16:11:14 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:24.577 16:11:14 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:24.577 16:11:14 -- pm/common@17 -- # local monitor 00:05:24.577 16:11:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.577 16:11:14 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:05:24.577 16:11:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.577 16:11:14 -- pm/common@21 -- # date +%s 00:05:24.577 16:11:14 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:24.577 16:11:14 -- pm/common@21 -- # date +%s 00:05:24.577 16:11:14 -- pm/common@25 -- # sleep 1 00:05:24.577 16:11:14 -- pm/common@21 -- # date +%s 00:05:24.577 16:11:14 -- pm/common@21 -- # date +%s 00:05:24.577 16:11:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732029074 00:05:24.577 16:11:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732029074 00:05:24.577 16:11:14 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732029074 00:05:24.577 16:11:14 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732029074 00:05:24.577 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732029074_collect-cpu-load.pm.log 00:05:24.577 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732029074_collect-vmstat.pm.log 00:05:24.578 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732029074_collect-cpu-temp.pm.log 00:05:24.839 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732029074_collect-bmc-pm.bmc.pm.log 00:05:25.779 
16:11:15 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:25.779 16:11:15 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:25.779 16:11:15 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.779 16:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:25.779 16:11:15 -- spdk/autotest.sh@59 -- # create_test_list 00:05:25.779 16:11:15 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:25.779 16:11:15 -- common/autotest_common.sh@10 -- # set +x 00:05:25.779 16:11:15 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:25.779 16:11:15 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.779 16:11:15 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.779 16:11:15 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:25.779 16:11:15 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:25.779 16:11:15 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:25.779 16:11:15 -- common/autotest_common.sh@1457 -- # uname 00:05:25.779 16:11:15 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:25.779 16:11:15 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:25.779 16:11:15 -- common/autotest_common.sh@1477 -- # uname 00:05:25.779 16:11:15 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:25.779 16:11:15 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:25.779 16:11:15 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:25.779 lcov: LCOV version 1.15 00:05:25.779 16:11:16 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:52.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:52.366 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:06:04.600 16:11:52 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:06:04.600 16:11:52 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:04.600 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:06:04.600 16:11:52 -- spdk/autotest.sh@78 -- # rm -f 00:06:04.600 16:11:52 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:04.600 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:06:04.600 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:06:04.600 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:06:04.600 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:06:04.600 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:06:04.600 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:06:04.600 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:06:04.600 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:06:04.600 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:06:04.600 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:06:04.600 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:06:04.600 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:06:04.600 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:06:04.600 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:06:04.600 
0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:06:04.600 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:06:04.600 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:06:04.600 16:11:54 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:06:04.600 16:11:54 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:04.600 16:11:54 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:04.600 16:11:54 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:06:04.600 16:11:54 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:06:04.600 16:11:54 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:06:04.600 16:11:54 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:04.600 16:11:54 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:04.600 16:11:54 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:04.600 16:11:54 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:06:04.600 16:11:54 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:06:04.601 16:11:54 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:06:04.601 16:11:54 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:06:04.601 16:11:54 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:06:04.601 16:11:54 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:06:04.601 No valid GPT data, bailing 00:06:04.601 16:11:54 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:06:04.601 16:11:54 -- scripts/common.sh@394 -- # pt= 00:06:04.601 16:11:54 -- scripts/common.sh@395 -- # return 1 00:06:04.601 16:11:54 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:06:04.601 1+0 records in 00:06:04.601 1+0 records out 00:06:04.601 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00219024 s, 479 MB/s 00:06:04.601 16:11:54 -- spdk/autotest.sh@105 -- # sync 00:06:04.601 16:11:54 -- 
spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:06:04.601 16:11:54 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:06:04.601 16:11:54 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:06:06.516 16:11:56 -- spdk/autotest.sh@111 -- # uname -s 00:06:06.516 16:11:56 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:06:06.516 16:11:56 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:06:06.516 16:11:56 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:06:07.456 Hugepages 00:06:07.457 node hugesize free / total 00:06:07.457 node0 1048576kB 0 / 0 00:06:07.457 node0 2048kB 0 / 0 00:06:07.457 node1 1048576kB 0 / 0 00:06:07.457 node1 2048kB 0 / 0 00:06:07.457 00:06:07.457 Type BDF Vendor Device NUMA Driver Device Block devices 00:06:07.457 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:06:07.457 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:06:07.717 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:06:07.717 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:06:07.717 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:06:07.717 16:11:57 -- spdk/autotest.sh@117 -- # uname -s 00:06:07.717 16:11:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:06:07.717 16:11:57 -- spdk/autotest.sh@119 -- # 
nvme_namespace_revert 00:06:07.717 16:11:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:09.103 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:09.103 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:09.103 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:10.045 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:10.045 16:12:00 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:10.986 16:12:01 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:10.986 16:12:01 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:10.986 16:12:01 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:10.986 16:12:01 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:10.986 16:12:01 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:10.986 16:12:01 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:10.986 16:12:01 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:10.986 16:12:01 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:10.986 16:12:01 -- common/autotest_common.sh@1499 -- # jq -r 
'.config[].params.traddr' 00:06:10.986 16:12:01 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:10.986 16:12:01 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:06:10.986 16:12:01 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:12.372 Waiting for block devices as requested 00:06:12.372 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:06:12.372 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:12.372 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:12.633 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:12.633 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:12.633 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:12.894 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:12.894 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:12.894 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:12.894 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:06:13.154 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:06:13.154 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:06:13.154 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:06:13.415 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:06:13.415 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:06:13.415 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:06:13.415 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:06:13.676 16:12:03 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:13.676 16:12:03 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:06:13.676 16:12:03 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:06:13.676 16:12:03 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:13.676 16:12:03 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:13.676 16:12:03 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:13.676 16:12:03 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:06:13.676 16:12:03 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:13.676 16:12:03 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:13.676 16:12:03 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:13.676 16:12:03 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:13.676 16:12:03 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:13.676 16:12:03 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:13.676 16:12:03 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:13.676 16:12:03 -- common/autotest_common.sh@1543 -- # continue 00:06:13.676 16:12:03 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:13.676 16:12:03 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:13.676 16:12:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.676 16:12:03 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:13.676 16:12:03 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:13.676 16:12:03 -- common/autotest_common.sh@10 -- # set +x 00:06:13.676 16:12:03 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:15.064 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.6 (8086 0e26): 
ioatdma -> vfio-pci 00:06:15.064 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:15.064 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:06:15.064 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:06:16.007 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:06:16.267 16:12:06 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:16.267 16:12:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:16.267 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:16.267 16:12:06 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:16.267 16:12:06 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:16.267 16:12:06 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:16.267 16:12:06 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:16.267 16:12:06 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:16.267 16:12:06 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:16.267 16:12:06 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:16.267 16:12:06 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:16.267 16:12:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:16.267 16:12:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:16.267 16:12:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:06:16.267 16:12:06 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:16.267 16:12:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:16.267 16:12:06 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:16.267 16:12:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:06:16.267 16:12:06 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:16.267 16:12:06 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:06:16.267 16:12:06 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:16.267 16:12:06 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:16.267 16:12:06 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:16.267 16:12:06 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:16.267 16:12:06 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:88:00.0 00:06:16.267 16:12:06 -- common/autotest_common.sh@1579 -- # [[ -z 0000:88:00.0 ]] 00:06:16.267 16:12:06 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=97178 00:06:16.267 16:12:06 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:16.267 16:12:06 -- common/autotest_common.sh@1585 -- # waitforlisten 97178 00:06:16.267 16:12:06 -- common/autotest_common.sh@835 -- # '[' -z 97178 ']' 00:06:16.267 16:12:06 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:16.267 16:12:06 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:16.267 16:12:06 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:16.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:16.267 16:12:06 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:16.267 16:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:16.267 [2024-11-19 16:12:06.532134] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:06:16.267 [2024-11-19 16:12:06.532229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97178 ] 00:06:16.267 [2024-11-19 16:12:06.600142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.526 [2024-11-19 16:12:06.644885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.785 16:12:06 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:16.785 16:12:06 -- common/autotest_common.sh@868 -- # return 0 00:06:16.785 16:12:06 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:16.785 16:12:06 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:16.785 16:12:06 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:06:20.075 nvme0n1 00:06:20.075 16:12:10 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:20.075 [2024-11-19 16:12:10.261949] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:06:20.075 [2024-11-19 16:12:10.262000] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:06:20.075 request: 00:06:20.075 { 00:06:20.075 "nvme_ctrlr_name": "nvme0", 00:06:20.075 "password": "test", 00:06:20.075 "method": "bdev_nvme_opal_revert", 00:06:20.075 "req_id": 1 00:06:20.075 } 00:06:20.075 Got JSON-RPC error response 00:06:20.075 response: 00:06:20.075 { 00:06:20.075 
"code": -32603, 00:06:20.075 "message": "Internal error" 00:06:20.075 } 00:06:20.075 16:12:10 -- common/autotest_common.sh@1591 -- # true 00:06:20.075 16:12:10 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:20.075 16:12:10 -- common/autotest_common.sh@1595 -- # killprocess 97178 00:06:20.075 16:12:10 -- common/autotest_common.sh@954 -- # '[' -z 97178 ']' 00:06:20.075 16:12:10 -- common/autotest_common.sh@958 -- # kill -0 97178 00:06:20.075 16:12:10 -- common/autotest_common.sh@959 -- # uname 00:06:20.075 16:12:10 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:20.075 16:12:10 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 97178 00:06:20.075 16:12:10 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:20.075 16:12:10 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:20.075 16:12:10 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 97178' 00:06:20.075 killing process with pid 97178 00:06:20.075 16:12:10 -- common/autotest_common.sh@973 -- # kill 97178 00:06:20.075 16:12:10 -- common/autotest_common.sh@978 -- # wait 97178 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.075 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.076 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.076 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.076 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared 
instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:20.335 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:21.713 16:12:12 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:21.972 16:12:12 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:21.972 16:12:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:21.972 16:12:12 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:21.972 16:12:12 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:21.972 16:12:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.972 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:21.972 16:12:12 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:21.972 16:12:12 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:21.972 16:12:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.972 16:12:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.972 16:12:12 -- common/autotest_common.sh@10 -- # set +x 00:06:21.972 ************************************ 00:06:21.972 START TEST env 00:06:21.972 ************************************ 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:21.972 * Looking for test 
storage... 00:06:21.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.972 16:12:12 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.972 16:12:12 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.972 16:12:12 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.972 16:12:12 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.972 16:12:12 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.972 16:12:12 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.972 16:12:12 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.972 16:12:12 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.972 16:12:12 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.972 16:12:12 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.972 16:12:12 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.972 16:12:12 env -- scripts/common.sh@344 -- # case "$op" in 00:06:21.972 16:12:12 env -- scripts/common.sh@345 -- # : 1 00:06:21.972 16:12:12 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.972 16:12:12 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:21.972 16:12:12 env -- scripts/common.sh@365 -- # decimal 1 00:06:21.972 16:12:12 env -- scripts/common.sh@353 -- # local d=1 00:06:21.972 16:12:12 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.972 16:12:12 env -- scripts/common.sh@355 -- # echo 1 00:06:21.972 16:12:12 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.972 16:12:12 env -- scripts/common.sh@366 -- # decimal 2 00:06:21.972 16:12:12 env -- scripts/common.sh@353 -- # local d=2 00:06:21.972 16:12:12 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.972 16:12:12 env -- scripts/common.sh@355 -- # echo 2 00:06:21.972 16:12:12 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.972 16:12:12 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.972 16:12:12 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.972 16:12:12 env -- scripts/common.sh@368 -- # return 0 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.972 --rc genhtml_branch_coverage=1 00:06:21.972 --rc genhtml_function_coverage=1 00:06:21.972 --rc genhtml_legend=1 00:06:21.972 --rc geninfo_all_blocks=1 00:06:21.972 --rc geninfo_unexecuted_blocks=1 00:06:21.972 00:06:21.972 ' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.972 --rc genhtml_branch_coverage=1 00:06:21.972 --rc genhtml_function_coverage=1 00:06:21.972 --rc genhtml_legend=1 00:06:21.972 --rc geninfo_all_blocks=1 00:06:21.972 --rc geninfo_unexecuted_blocks=1 00:06:21.972 00:06:21.972 ' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:21.972 --rc genhtml_branch_coverage=1 00:06:21.972 --rc genhtml_function_coverage=1 00:06:21.972 --rc genhtml_legend=1 00:06:21.972 --rc geninfo_all_blocks=1 00:06:21.972 --rc geninfo_unexecuted_blocks=1 00:06:21.972 00:06:21.972 ' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.972 --rc genhtml_branch_coverage=1 00:06:21.972 --rc genhtml_function_coverage=1 00:06:21.972 --rc genhtml_legend=1 00:06:21.972 --rc geninfo_all_blocks=1 00:06:21.972 --rc geninfo_unexecuted_blocks=1 00:06:21.972 00:06:21.972 ' 00:06:21.972 16:12:12 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.972 16:12:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.972 16:12:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:21.972 ************************************ 00:06:21.972 START TEST env_memory 00:06:21.972 ************************************ 00:06:21.972 16:12:12 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:21.972 00:06:21.972 00:06:21.972 CUnit - A unit testing framework for C - Version 2.1-3 00:06:21.972 http://cunit.sourceforge.net/ 00:06:21.972 00:06:21.972 00:06:21.972 Suite: memory 00:06:21.973 Test: alloc and free memory map ...[2024-11-19 16:12:12.292585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:21.973 passed 00:06:22.232 Test: mem map translation ...[2024-11-19 16:12:12.315381] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:22.232 [2024-11-19 
16:12:12.315402] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:22.232 [2024-11-19 16:12:12.315456] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:22.232 [2024-11-19 16:12:12.315468] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:22.232 passed 00:06:22.232 Test: mem map registration ...[2024-11-19 16:12:12.361983] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:22.232 [2024-11-19 16:12:12.362002] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:22.232 passed 00:06:22.232 Test: mem map adjacent registrations ...passed 00:06:22.232 00:06:22.232 Run Summary: Type Total Ran Passed Failed Inactive 00:06:22.232 suites 1 1 n/a 0 0 00:06:22.232 tests 4 4 4 0 0 00:06:22.232 asserts 152 152 152 0 n/a 00:06:22.232 00:06:22.232 Elapsed time = 0.151 seconds 00:06:22.232 00:06:22.232 real 0m0.160s 00:06:22.232 user 0m0.151s 00:06:22.232 sys 0m0.008s 00:06:22.232 16:12:12 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.232 16:12:12 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:22.232 ************************************ 00:06:22.232 END TEST env_memory 00:06:22.232 ************************************ 00:06:22.232 16:12:12 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:22.232 16:12:12 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:22.232 16:12:12 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:22.232 16:12:12 env -- common/autotest_common.sh@10 -- # set +x 00:06:22.232 ************************************ 00:06:22.232 START TEST env_vtophys 00:06:22.232 ************************************ 00:06:22.232 16:12:12 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:22.232 EAL: lib.eal log level changed from notice to debug 00:06:22.232 EAL: Detected lcore 0 as core 0 on socket 0 00:06:22.232 EAL: Detected lcore 1 as core 1 on socket 0 00:06:22.232 EAL: Detected lcore 2 as core 2 on socket 0 00:06:22.232 EAL: Detected lcore 3 as core 3 on socket 0 00:06:22.232 EAL: Detected lcore 4 as core 4 on socket 0 00:06:22.232 EAL: Detected lcore 5 as core 5 on socket 0 00:06:22.232 EAL: Detected lcore 6 as core 8 on socket 0 00:06:22.232 EAL: Detected lcore 7 as core 9 on socket 0 00:06:22.232 EAL: Detected lcore 8 as core 10 on socket 0 00:06:22.232 EAL: Detected lcore 9 as core 11 on socket 0 00:06:22.232 EAL: Detected lcore 10 as core 12 on socket 0 00:06:22.232 EAL: Detected lcore 11 as core 13 on socket 0 00:06:22.232 EAL: Detected lcore 12 as core 0 on socket 1 00:06:22.232 EAL: Detected lcore 13 as core 1 on socket 1 00:06:22.232 EAL: Detected lcore 14 as core 2 on socket 1 00:06:22.232 EAL: Detected lcore 15 as core 3 on socket 1 00:06:22.232 EAL: Detected lcore 16 as core 4 on socket 1 00:06:22.232 EAL: Detected lcore 17 as core 5 on socket 1 00:06:22.232 EAL: Detected lcore 18 as core 8 on socket 1 00:06:22.232 EAL: Detected lcore 19 as core 9 on socket 1 00:06:22.232 EAL: Detected lcore 20 as core 10 on socket 1 00:06:22.232 EAL: Detected lcore 21 as core 11 on socket 1 00:06:22.232 EAL: Detected lcore 22 as core 12 on socket 1 00:06:22.232 EAL: Detected lcore 23 as core 13 on socket 1 00:06:22.232 EAL: Detected lcore 24 as core 0 on socket 0 00:06:22.232 EAL: Detected lcore 25 as core 
1 on socket 0 00:06:22.232 EAL: Detected lcore 26 as core 2 on socket 0 00:06:22.232 EAL: Detected lcore 27 as core 3 on socket 0 00:06:22.232 EAL: Detected lcore 28 as core 4 on socket 0 00:06:22.232 EAL: Detected lcore 29 as core 5 on socket 0 00:06:22.232 EAL: Detected lcore 30 as core 8 on socket 0 00:06:22.232 EAL: Detected lcore 31 as core 9 on socket 0 00:06:22.232 EAL: Detected lcore 32 as core 10 on socket 0 00:06:22.232 EAL: Detected lcore 33 as core 11 on socket 0 00:06:22.232 EAL: Detected lcore 34 as core 12 on socket 0 00:06:22.232 EAL: Detected lcore 35 as core 13 on socket 0 00:06:22.232 EAL: Detected lcore 36 as core 0 on socket 1 00:06:22.232 EAL: Detected lcore 37 as core 1 on socket 1 00:06:22.232 EAL: Detected lcore 38 as core 2 on socket 1 00:06:22.232 EAL: Detected lcore 39 as core 3 on socket 1 00:06:22.232 EAL: Detected lcore 40 as core 4 on socket 1 00:06:22.232 EAL: Detected lcore 41 as core 5 on socket 1 00:06:22.232 EAL: Detected lcore 42 as core 8 on socket 1 00:06:22.232 EAL: Detected lcore 43 as core 9 on socket 1 00:06:22.232 EAL: Detected lcore 44 as core 10 on socket 1 00:06:22.232 EAL: Detected lcore 45 as core 11 on socket 1 00:06:22.232 EAL: Detected lcore 46 as core 12 on socket 1 00:06:22.232 EAL: Detected lcore 47 as core 13 on socket 1 00:06:22.232 EAL: Maximum logical cores by configuration: 128 00:06:22.232 EAL: Detected CPU lcores: 48 00:06:22.232 EAL: Detected NUMA nodes: 2 00:06:22.232 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:22.232 EAL: Detected shared linkage of DPDK 00:06:22.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:22.232 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:22.232 EAL: Registered [vdev] bus. 
00:06:22.232 EAL: bus.vdev log level changed from disabled to notice 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:22.233 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:22.233 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:22.233 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:22.233 EAL: No shared files mode enabled, IPC will be disabled 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Bus pci wants IOVA as 'DC' 00:06:22.233 EAL: Bus vdev wants IOVA as 'DC' 00:06:22.233 EAL: Buses did not request a specific IOVA mode. 00:06:22.233 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:22.233 EAL: Selected IOVA mode 'VA' 00:06:22.233 EAL: Probing VFIO support... 00:06:22.233 EAL: IOMMU type 1 (Type 1) is supported 00:06:22.233 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:22.233 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:22.233 EAL: VFIO support initialized 00:06:22.233 EAL: Ask a virtual area of 0x2e000 bytes 00:06:22.233 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:22.233 EAL: Setting up physically contiguous memory... 
00:06:22.233 EAL: Setting maximum number of open files to 524288 00:06:22.233 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:22.233 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:22.233 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:22.233 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:22.233 EAL: Ask a virtual area of 0x61000 bytes 00:06:22.233 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:22.233 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:22.233 EAL: Ask a virtual area of 0x400000000 bytes 00:06:22.233 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:06:22.233 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:22.233 EAL: Hugepages will be freed exactly as allocated. 
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: TSC frequency is ~2700000 KHz
00:06:22.233 EAL: Main lcore 0 is ready (tid=7f7e32f63a00;cpuset=[0])
00:06:22.233 EAL: Trying to obtain current memory policy.
00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:22.233 EAL: Restoring previous memory policy: 0
00:06:22.233 EAL: request: mp_malloc_sync
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: Heap on socket 0 was expanded by 2MB
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:22.233 EAL: Mem event callback 'spdk:(nil)' registered
00:06:22.233
00:06:22.233
00:06:22.233 CUnit - A unit testing framework for C - Version 2.1-3
00:06:22.233 http://cunit.sourceforge.net/
00:06:22.233
00:06:22.233
00:06:22.233 Suite: components_suite
00:06:22.233 Test: vtophys_malloc_test ...passed
00:06:22.233 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:22.233 EAL: Restoring previous memory policy: 4
00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)'
00:06:22.233 EAL: request: mp_malloc_sync
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: Heap on socket 0 was expanded by 4MB
00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)'
00:06:22.233 EAL: request: mp_malloc_sync
00:06:22.233 EAL: No shared files mode enabled, IPC is disabled
00:06:22.233 EAL: Heap on socket 0 was shrunk by 4MB
00:06:22.233 EAL: Trying to obtain current memory policy.
00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.233 EAL: Restoring previous memory policy: 4 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was expanded by 6MB 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was shrunk by 6MB 00:06:22.233 EAL: Trying to obtain current memory policy. 00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.233 EAL: Restoring previous memory policy: 4 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was expanded by 10MB 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was shrunk by 10MB 00:06:22.233 EAL: Trying to obtain current memory policy. 00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.233 EAL: Restoring previous memory policy: 4 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was expanded by 18MB 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was shrunk by 18MB 00:06:22.233 EAL: Trying to obtain current memory policy. 
00:06:22.233 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.233 EAL: Restoring previous memory policy: 4 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.233 EAL: request: mp_malloc_sync 00:06:22.233 EAL: No shared files mode enabled, IPC is disabled 00:06:22.233 EAL: Heap on socket 0 was expanded by 34MB 00:06:22.233 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was shrunk by 34MB 00:06:22.492 EAL: Trying to obtain current memory policy. 00:06:22.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.492 EAL: Restoring previous memory policy: 4 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was expanded by 66MB 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was shrunk by 66MB 00:06:22.492 EAL: Trying to obtain current memory policy. 00:06:22.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.492 EAL: Restoring previous memory policy: 4 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was expanded by 130MB 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was shrunk by 130MB 00:06:22.492 EAL: Trying to obtain current memory policy. 
00:06:22.492 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.492 EAL: Restoring previous memory policy: 4 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.492 EAL: request: mp_malloc_sync 00:06:22.492 EAL: No shared files mode enabled, IPC is disabled 00:06:22.492 EAL: Heap on socket 0 was expanded by 258MB 00:06:22.492 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.751 EAL: request: mp_malloc_sync 00:06:22.751 EAL: No shared files mode enabled, IPC is disabled 00:06:22.751 EAL: Heap on socket 0 was shrunk by 258MB 00:06:22.751 EAL: Trying to obtain current memory policy. 00:06:22.751 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:22.751 EAL: Restoring previous memory policy: 4 00:06:22.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:22.751 EAL: request: mp_malloc_sync 00:06:22.751 EAL: No shared files mode enabled, IPC is disabled 00:06:22.751 EAL: Heap on socket 0 was expanded by 514MB 00:06:22.751 EAL: Calling mem event callback 'spdk:(nil)' 00:06:23.010 EAL: request: mp_malloc_sync 00:06:23.010 EAL: No shared files mode enabled, IPC is disabled 00:06:23.010 EAL: Heap on socket 0 was shrunk by 514MB 00:06:23.010 EAL: Trying to obtain current memory policy. 
00:06:23.010 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:23.267 EAL: Restoring previous memory policy: 4
00:06:23.267 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.267 EAL: request: mp_malloc_sync
00:06:23.267 EAL: No shared files mode enabled, IPC is disabled
00:06:23.267 EAL: Heap on socket 0 was expanded by 1026MB
00:06:23.526 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.786 EAL: request: mp_malloc_sync
00:06:23.786 EAL: No shared files mode enabled, IPC is disabled
00:06:23.786 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:23.786 passed
00:06:23.786
00:06:23.786 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:06:23.786               suites      1      1    n/a      0        0
00:06:23.786                tests      2      2      2      0        0
00:06:23.786              asserts    497    497    497      0      n/a
00:06:23.786
00:06:23.786 Elapsed time = 1.300 seconds
00:06:23.786 EAL: Calling mem event callback 'spdk:(nil)'
00:06:23.786 EAL: request: mp_malloc_sync
00:06:23.786 EAL: No shared files mode enabled, IPC is disabled
00:06:23.786 EAL: Heap on socket 0 was shrunk by 2MB
00:06:23.786 EAL: No shared files mode enabled, IPC is disabled
00:06:23.786 EAL: No shared files mode enabled, IPC is disabled
00:06:23.786 EAL: No shared files mode enabled, IPC is disabled
00:06:23.786
00:06:23.786 real 0m1.418s
00:06:23.786 user 0m0.838s
00:06:23.786 sys 0m0.545s
00:06:23.786 16:12:13 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:23.786 16:12:13 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:23.786 ************************************
00:06:23.786 END TEST env_vtophys
00:06:23.786 ************************************
00:06:23.786 16:12:13 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:23.786 16:12:13 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:23.786 16:12:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:23.786 16:12:13 env -- common/autotest_common.sh@10 -- # set +x
00:06:23.786
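The vtophys_spdk_malloc_test trace above expands the heap in steps of 4, 6, 10, 18, 34, 66, 130, 258, 514, 1026 MB. Those sizes follow the pattern 2^n + 2 MB, consistent with each test allocation of 2^n MB dragging in a small amount of allocator overhead that rounds heap growth up by one extra 2 MB hugepage. This is an inference from the logged numbers, not a statement of SPDK internals:

```python
# Heap expansion sizes observed in the log (in MB).
logged_mb = [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]

# Candidate model: doubling allocations plus one extra 2 MB hugepage.
derived_mb = [2**n + 2 for n in range(1, 11)]

assert logged_mb == derived_mb
print(derived_mb)
# → [4, 6, 10, 18, 34, 66, 130, 258, 514, 1026]
```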
************************************ 00:06:23.786 START TEST env_pci 00:06:23.786 ************************************ 00:06:23.786 16:12:13 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:06:23.786 00:06:23.786 00:06:23.786 CUnit - A unit testing framework for C - Version 2.1-3 00:06:23.786 http://cunit.sourceforge.net/ 00:06:23.786 00:06:23.786 00:06:23.786 Suite: pci 00:06:23.786 Test: pci_hook ...[2024-11-19 16:12:13.943590] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 98587 has claimed it 00:06:23.786 EAL: Cannot find device (10000:00:01.0) 00:06:23.786 EAL: Failed to attach device on primary process 00:06:23.786 passed 00:06:23.786 00:06:23.786 Run Summary: Type Total Ran Passed Failed Inactive 00:06:23.786 suites 1 1 n/a 0 0 00:06:23.786 tests 1 1 1 0 0 00:06:23.786 asserts 25 25 25 0 n/a 00:06:23.786 00:06:23.786 Elapsed time = 0.020 seconds 00:06:23.786 00:06:23.786 real 0m0.032s 00:06:23.786 user 0m0.008s 00:06:23.786 sys 0m0.023s 00:06:23.786 16:12:13 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.786 16:12:13 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:06:23.786 ************************************ 00:06:23.786 END TEST env_pci 00:06:23.786 ************************************ 00:06:23.786 16:12:13 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:23.786 16:12:13 env -- env/env.sh@15 -- # uname 00:06:23.786 16:12:13 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:23.786 16:12:13 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:23.786 16:12:13 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:23.786 16:12:13 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:23.786 16:12:13 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.786 16:12:13 env -- common/autotest_common.sh@10 -- # set +x 00:06:23.786 ************************************ 00:06:23.786 START TEST env_dpdk_post_init 00:06:23.786 ************************************ 00:06:23.786 16:12:14 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:23.786 EAL: Detected CPU lcores: 48 00:06:23.786 EAL: Detected NUMA nodes: 2 00:06:23.786 EAL: Detected shared linkage of DPDK 00:06:23.786 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:23.786 EAL: Selected IOVA mode 'VA' 00:06:23.786 EAL: VFIO support initialized 00:06:23.786 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:23.786 EAL: Using IOMMU type 1 (Type 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:06:24.046 EAL: Probe PCI driver: 
spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:06:24.046 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:06:24.986 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:06:28.271 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:06:28.271 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:06:28.271 Starting DPDK initialization... 00:06:28.271 Starting SPDK post initialization... 00:06:28.271 SPDK NVMe probe 00:06:28.271 Attaching to 0000:88:00.0 00:06:28.271 Attached to 0000:88:00.0 00:06:28.271 Cleaning up... 00:06:28.271 00:06:28.271 real 0m4.379s 00:06:28.271 user 0m3.247s 00:06:28.271 sys 0m0.199s 00:06:28.271 16:12:18 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.271 16:12:18 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:06:28.271 ************************************ 00:06:28.271 END TEST env_dpdk_post_init 00:06:28.271 ************************************ 00:06:28.271 16:12:18 env -- env/env.sh@26 -- # uname 00:06:28.271 16:12:18 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:28.271 16:12:18 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:28.271 16:12:18 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.271 16:12:18 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.271 16:12:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:28.271 ************************************ 00:06:28.271 START TEST env_mem_callbacks 00:06:28.271 ************************************ 00:06:28.271 16:12:18 
env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:28.271 EAL: Detected CPU lcores: 48 00:06:28.271 EAL: Detected NUMA nodes: 2 00:06:28.271 EAL: Detected shared linkage of DPDK 00:06:28.271 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:28.271 EAL: Selected IOVA mode 'VA' 00:06:28.271 EAL: VFIO support initialized 00:06:28.271 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:28.271 00:06:28.271 00:06:28.271 CUnit - A unit testing framework for C - Version 2.1-3 00:06:28.271 http://cunit.sourceforge.net/ 00:06:28.271 00:06:28.271 00:06:28.271 Suite: memory 00:06:28.271 Test: test ... 00:06:28.271 register 0x200000200000 2097152 00:06:28.271 malloc 3145728 00:06:28.271 register 0x200000400000 4194304 00:06:28.271 buf 0x200000500000 len 3145728 PASSED 00:06:28.271 malloc 64 00:06:28.271 buf 0x2000004fff40 len 64 PASSED 00:06:28.271 malloc 4194304 00:06:28.271 register 0x200000800000 6291456 00:06:28.271 buf 0x200000a00000 len 4194304 PASSED 00:06:28.271 free 0x200000500000 3145728 00:06:28.271 free 0x2000004fff40 64 00:06:28.271 unregister 0x200000400000 4194304 PASSED 00:06:28.271 free 0x200000a00000 4194304 00:06:28.271 unregister 0x200000800000 6291456 PASSED 00:06:28.271 malloc 8388608 00:06:28.271 register 0x200000400000 10485760 00:06:28.271 buf 0x200000600000 len 8388608 PASSED 00:06:28.271 free 0x200000600000 8388608 00:06:28.271 unregister 0x200000400000 10485760 PASSED 00:06:28.271 passed 00:06:28.271 00:06:28.271 Run Summary: Type Total Ran Passed Failed Inactive 00:06:28.271 suites 1 1 n/a 0 0 00:06:28.271 tests 1 1 1 0 0 00:06:28.271 asserts 15 15 15 0 n/a 00:06:28.271 00:06:28.271 Elapsed time = 0.005 seconds 00:06:28.271 00:06:28.271 real 0m0.048s 00:06:28.271 user 0m0.008s 00:06:28.271 sys 0m0.040s 00:06:28.271 16:12:18 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.271 16:12:18 
env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:06:28.271 ************************************ 00:06:28.271 END TEST env_mem_callbacks 00:06:28.271 ************************************ 00:06:28.271 00:06:28.271 real 0m6.434s 00:06:28.271 user 0m4.451s 00:06:28.271 sys 0m1.036s 00:06:28.271 16:12:18 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.271 16:12:18 env -- common/autotest_common.sh@10 -- # set +x 00:06:28.271 ************************************ 00:06:28.271 END TEST env 00:06:28.271 ************************************ 00:06:28.271 16:12:18 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:28.271 16:12:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:28.271 16:12:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.272 16:12:18 -- common/autotest_common.sh@10 -- # set +x 00:06:28.272 ************************************ 00:06:28.272 START TEST rpc 00:06:28.272 ************************************ 00:06:28.272 16:12:18 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:06:28.531 * Looking for test storage... 
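In the mem_callbacks trace above, each `malloc` of N bytes triggers a `register` of more than N: 3 MiB registers 4 MiB, 4 MiB registers 6 MiB, and 8 MiB registers 10 MiB. One consistent reading is that a small allocator header pushes the allocation just past a hugepage boundary, so the registered region is N rounded down to whole 2 MiB pages plus one extra page. This model is inferred from the three data points in the log, not taken from DPDK documentation:

```python
# Model the register sizes seen in the mem_callbacks test output.
HUGE = 2 * 1024 * 1024  # 2 MiB hugepage

def registered(alloc_bytes):
    # Whole pages covered by the allocation, plus one page for the
    # hypothetical allocator header spilling over the boundary.
    return (alloc_bytes // HUGE + 1) * HUGE

# (malloc size, registered size) pairs from the log.
for alloc, reg in [(3145728, 4194304), (4194304, 6291456), (8388608, 10485760)]:
    assert registered(alloc) == reg
print("model matches all three register events")
```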
00:06:28.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.531 16:12:18 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.531 16:12:18 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.531 16:12:18 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.531 16:12:18 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.531 16:12:18 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.531 16:12:18 rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:28.531 16:12:18 rpc -- scripts/common.sh@345 -- # : 1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.531 16:12:18 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.531 16:12:18 rpc -- scripts/common.sh@365 -- # decimal 1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@353 -- # local d=1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.531 16:12:18 rpc -- scripts/common.sh@355 -- # echo 1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.531 16:12:18 rpc -- scripts/common.sh@366 -- # decimal 2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@353 -- # local d=2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.531 16:12:18 rpc -- scripts/common.sh@355 -- # echo 2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.531 16:12:18 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.531 16:12:18 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.531 16:12:18 rpc -- scripts/common.sh@368 -- # return 0 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.531 16:12:18 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:28.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.531 --rc genhtml_branch_coverage=1 00:06:28.532 --rc genhtml_function_coverage=1 00:06:28.532 --rc genhtml_legend=1 00:06:28.532 --rc geninfo_all_blocks=1 00:06:28.532 --rc geninfo_unexecuted_blocks=1 00:06:28.532 00:06:28.532 ' 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.532 --rc genhtml_branch_coverage=1 00:06:28.532 --rc genhtml_function_coverage=1 00:06:28.532 --rc genhtml_legend=1 00:06:28.532 --rc geninfo_all_blocks=1 00:06:28.532 --rc geninfo_unexecuted_blocks=1 00:06:28.532 00:06:28.532 ' 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:28.532 --rc genhtml_branch_coverage=1 00:06:28.532 --rc genhtml_function_coverage=1 00:06:28.532 --rc genhtml_legend=1 00:06:28.532 --rc geninfo_all_blocks=1 00:06:28.532 --rc geninfo_unexecuted_blocks=1 00:06:28.532 00:06:28.532 ' 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:28.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.532 --rc genhtml_branch_coverage=1 00:06:28.532 --rc genhtml_function_coverage=1 00:06:28.532 --rc genhtml_legend=1 00:06:28.532 --rc geninfo_all_blocks=1 00:06:28.532 --rc geninfo_unexecuted_blocks=1 00:06:28.532 00:06:28.532 ' 00:06:28.532 16:12:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=99288 00:06:28.532 16:12:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:28.532 16:12:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.532 16:12:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 99288 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@835 -- # '[' -z 99288 ']' 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:28.532 16:12:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:28.532 [2024-11-19 16:12:18.772810] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:06:28.532 [2024-11-19 16:12:18.772906] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99288 ] 00:06:28.532 [2024-11-19 16:12:18.839977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.791 [2024-11-19 16:12:18.885792] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:28.791 [2024-11-19 16:12:18.885858] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 99288' to capture a snapshot of events at runtime. 00:06:28.791 [2024-11-19 16:12:18.885870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.791 [2024-11-19 16:12:18.885881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.791 [2024-11-19 16:12:18.885890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid99288 for offline analysis/debug. 
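The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the `waitforlisten` helper, which blocks until the freshly launched `spdk_tgt` accepts RPC connections. A minimal sketch of the same idea in Python (the polling loop, timeout, and function name are illustrative, not SPDK's actual implementation):

```python
import socket
import time

def wait_for_listen(path, timeout=10.0, interval=0.2):
    """Poll until a Unix domain socket at `path` accepts connections."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True           # something is listening
        except OSError:
            time.sleep(interval)  # not up yet (or path missing); retry
        finally:
            s.close()
    return False
```

Polling with connect attempts, rather than just checking that the socket file exists, avoids the race where the path has been created but the server has not yet called `listen()`.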
00:06:28.791 [2024-11-19 16:12:18.886487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.049 16:12:19 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:29.049 16:12:19 rpc -- common/autotest_common.sh@868 -- # return 0 00:06:29.049 16:12:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:29.049 16:12:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:29.049 16:12:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:29.049 16:12:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:29.049 16:12:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.049 16:12:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.049 16:12:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 ************************************ 00:06:29.049 START TEST rpc_integrity 00:06:29.049 ************************************ 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.049 16:12:19 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.049 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.049 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:29.049 { 00:06:29.049 "name": "Malloc0", 00:06:29.049 "aliases": [ 00:06:29.049 "de5698cd-c2b5-4eaf-875d-b118a519d594" 00:06:29.049 ], 00:06:29.049 "product_name": "Malloc disk", 00:06:29.049 "block_size": 512, 00:06:29.049 "num_blocks": 16384, 00:06:29.049 "uuid": "de5698cd-c2b5-4eaf-875d-b118a519d594", 00:06:29.049 "assigned_rate_limits": { 00:06:29.049 "rw_ios_per_sec": 0, 00:06:29.049 "rw_mbytes_per_sec": 0, 00:06:29.049 "r_mbytes_per_sec": 0, 00:06:29.049 "w_mbytes_per_sec": 0 00:06:29.049 }, 00:06:29.049 "claimed": false, 00:06:29.049 "zoned": false, 00:06:29.049 "supported_io_types": { 00:06:29.049 "read": true, 00:06:29.049 "write": true, 00:06:29.049 "unmap": true, 00:06:29.049 "flush": true, 00:06:29.049 "reset": true, 00:06:29.049 "nvme_admin": false, 00:06:29.049 "nvme_io": false, 00:06:29.049 "nvme_io_md": false, 00:06:29.049 "write_zeroes": true, 00:06:29.050 "zcopy": true, 00:06:29.050 "get_zone_info": false, 00:06:29.050 
"zone_management": false, 00:06:29.050 "zone_append": false, 00:06:29.050 "compare": false, 00:06:29.050 "compare_and_write": false, 00:06:29.050 "abort": true, 00:06:29.050 "seek_hole": false, 00:06:29.050 "seek_data": false, 00:06:29.050 "copy": true, 00:06:29.050 "nvme_iov_md": false 00:06:29.050 }, 00:06:29.050 "memory_domains": [ 00:06:29.050 { 00:06:29.050 "dma_device_id": "system", 00:06:29.050 "dma_device_type": 1 00:06:29.050 }, 00:06:29.050 { 00:06:29.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.050 "dma_device_type": 2 00:06:29.050 } 00:06:29.050 ], 00:06:29.050 "driver_specific": {} 00:06:29.050 } 00:06:29.050 ]' 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 [2024-11-19 16:12:19.271560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:29.050 [2024-11-19 16:12:19.271614] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.050 [2024-11-19 16:12:19.271636] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe9f8e0 00:06:29.050 [2024-11-19 16:12:19.271650] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.050 [2024-11-19 16:12:19.272954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.050 [2024-11-19 16:12:19.272977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:29.050 Passthru0 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:29.050 { 00:06:29.050 "name": "Malloc0", 00:06:29.050 "aliases": [ 00:06:29.050 "de5698cd-c2b5-4eaf-875d-b118a519d594" 00:06:29.050 ], 00:06:29.050 "product_name": "Malloc disk", 00:06:29.050 "block_size": 512, 00:06:29.050 "num_blocks": 16384, 00:06:29.050 "uuid": "de5698cd-c2b5-4eaf-875d-b118a519d594", 00:06:29.050 "assigned_rate_limits": { 00:06:29.050 "rw_ios_per_sec": 0, 00:06:29.050 "rw_mbytes_per_sec": 0, 00:06:29.050 "r_mbytes_per_sec": 0, 00:06:29.050 "w_mbytes_per_sec": 0 00:06:29.050 }, 00:06:29.050 "claimed": true, 00:06:29.050 "claim_type": "exclusive_write", 00:06:29.050 "zoned": false, 00:06:29.050 "supported_io_types": { 00:06:29.050 "read": true, 00:06:29.050 "write": true, 00:06:29.050 "unmap": true, 00:06:29.050 "flush": true, 00:06:29.050 "reset": true, 00:06:29.050 "nvme_admin": false, 00:06:29.050 "nvme_io": false, 00:06:29.050 "nvme_io_md": false, 00:06:29.050 "write_zeroes": true, 00:06:29.050 "zcopy": true, 00:06:29.050 "get_zone_info": false, 00:06:29.050 "zone_management": false, 00:06:29.050 "zone_append": false, 00:06:29.050 "compare": false, 00:06:29.050 "compare_and_write": false, 00:06:29.050 "abort": true, 00:06:29.050 "seek_hole": false, 00:06:29.050 "seek_data": false, 00:06:29.050 "copy": true, 00:06:29.050 "nvme_iov_md": false 00:06:29.050 }, 00:06:29.050 "memory_domains": [ 00:06:29.050 { 00:06:29.050 "dma_device_id": "system", 00:06:29.050 "dma_device_type": 1 00:06:29.050 }, 00:06:29.050 { 00:06:29.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.050 "dma_device_type": 2 00:06:29.050 } 00:06:29.050 ], 00:06:29.050 "driver_specific": {} 00:06:29.050 }, 00:06:29.050 { 
00:06:29.050 "name": "Passthru0", 00:06:29.050 "aliases": [ 00:06:29.050 "649d97b5-ced6-5476-a2b9-aa48c50ebfab" 00:06:29.050 ], 00:06:29.050 "product_name": "passthru", 00:06:29.050 "block_size": 512, 00:06:29.050 "num_blocks": 16384, 00:06:29.050 "uuid": "649d97b5-ced6-5476-a2b9-aa48c50ebfab", 00:06:29.050 "assigned_rate_limits": { 00:06:29.050 "rw_ios_per_sec": 0, 00:06:29.050 "rw_mbytes_per_sec": 0, 00:06:29.050 "r_mbytes_per_sec": 0, 00:06:29.050 "w_mbytes_per_sec": 0 00:06:29.050 }, 00:06:29.050 "claimed": false, 00:06:29.050 "zoned": false, 00:06:29.050 "supported_io_types": { 00:06:29.050 "read": true, 00:06:29.050 "write": true, 00:06:29.050 "unmap": true, 00:06:29.050 "flush": true, 00:06:29.050 "reset": true, 00:06:29.050 "nvme_admin": false, 00:06:29.050 "nvme_io": false, 00:06:29.050 "nvme_io_md": false, 00:06:29.050 "write_zeroes": true, 00:06:29.050 "zcopy": true, 00:06:29.050 "get_zone_info": false, 00:06:29.050 "zone_management": false, 00:06:29.050 "zone_append": false, 00:06:29.050 "compare": false, 00:06:29.050 "compare_and_write": false, 00:06:29.050 "abort": true, 00:06:29.050 "seek_hole": false, 00:06:29.050 "seek_data": false, 00:06:29.050 "copy": true, 00:06:29.050 "nvme_iov_md": false 00:06:29.050 }, 00:06:29.050 "memory_domains": [ 00:06:29.050 { 00:06:29.050 "dma_device_id": "system", 00:06:29.050 "dma_device_type": 1 00:06:29.050 }, 00:06:29.050 { 00:06:29.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.050 "dma_device_type": 2 00:06:29.050 } 00:06:29.050 ], 00:06:29.050 "driver_specific": { 00:06:29.050 "passthru": { 00:06:29.050 "name": "Passthru0", 00:06:29.050 "base_bdev_name": "Malloc0" 00:06:29.050 } 00:06:29.050 } 00:06:29.050 } 00:06:29.050 ]' 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:29.050 16:12:19 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:29.050 16:12:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:29.050 00:06:29.050 real 0m0.217s 00:06:29.050 user 0m0.141s 00:06:29.050 sys 0m0.018s 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.050 16:12:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.050 ************************************ 00:06:29.050 END TEST rpc_integrity 00:06:29.050 ************************************ 00:06:29.309 16:12:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 ************************************ 00:06:29.309 START TEST rpc_plugins 
00:06:29.309 ************************************ 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:29.309 { 00:06:29.309 "name": "Malloc1", 00:06:29.309 "aliases": [ 00:06:29.309 "05ecb3e6-7c42-409a-bac0-2fac1bc99a19" 00:06:29.309 ], 00:06:29.309 "product_name": "Malloc disk", 00:06:29.309 "block_size": 4096, 00:06:29.309 "num_blocks": 256, 00:06:29.309 "uuid": "05ecb3e6-7c42-409a-bac0-2fac1bc99a19", 00:06:29.309 "assigned_rate_limits": { 00:06:29.309 "rw_ios_per_sec": 0, 00:06:29.309 "rw_mbytes_per_sec": 0, 00:06:29.309 "r_mbytes_per_sec": 0, 00:06:29.309 "w_mbytes_per_sec": 0 00:06:29.309 }, 00:06:29.309 "claimed": false, 00:06:29.309 "zoned": false, 00:06:29.309 "supported_io_types": { 00:06:29.309 "read": true, 00:06:29.309 "write": true, 00:06:29.309 "unmap": true, 00:06:29.309 "flush": true, 00:06:29.309 "reset": true, 00:06:29.309 "nvme_admin": false, 00:06:29.309 "nvme_io": false, 00:06:29.309 "nvme_io_md": false, 00:06:29.309 "write_zeroes": true, 00:06:29.309 "zcopy": true, 00:06:29.309 "get_zone_info": false, 00:06:29.309 "zone_management": false, 00:06:29.309 
"zone_append": false, 00:06:29.309 "compare": false, 00:06:29.309 "compare_and_write": false, 00:06:29.309 "abort": true, 00:06:29.309 "seek_hole": false, 00:06:29.309 "seek_data": false, 00:06:29.309 "copy": true, 00:06:29.309 "nvme_iov_md": false 00:06:29.309 }, 00:06:29.309 "memory_domains": [ 00:06:29.309 { 00:06:29.309 "dma_device_id": "system", 00:06:29.309 "dma_device_type": 1 00:06:29.309 }, 00:06:29.309 { 00:06:29.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.309 "dma_device_type": 2 00:06:29.309 } 00:06:29.309 ], 00:06:29.309 "driver_specific": {} 00:06:29.309 } 00:06:29.309 ]' 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:29.309 16:12:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:29.309 00:06:29.309 real 0m0.119s 00:06:29.309 user 0m0.075s 00:06:29.309 sys 0m0.009s 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 ************************************ 
00:06:29.309 END TEST rpc_plugins 00:06:29.309 ************************************ 00:06:29.309 16:12:19 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.309 16:12:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 ************************************ 00:06:29.309 START TEST rpc_trace_cmd_test 00:06:29.309 ************************************ 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:29.309 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid99288", 00:06:29.309 "tpoint_group_mask": "0x8", 00:06:29.309 "iscsi_conn": { 00:06:29.309 "mask": "0x2", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "scsi": { 00:06:29.309 "mask": "0x4", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "bdev": { 00:06:29.309 "mask": "0x8", 00:06:29.309 "tpoint_mask": "0xffffffffffffffff" 00:06:29.309 }, 00:06:29.309 "nvmf_rdma": { 00:06:29.309 "mask": "0x10", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "nvmf_tcp": { 00:06:29.309 "mask": "0x20", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "ftl": { 00:06:29.309 "mask": "0x40", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "blobfs": { 00:06:29.309 "mask": "0x80", 00:06:29.309 
"tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "dsa": { 00:06:29.309 "mask": "0x200", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "thread": { 00:06:29.309 "mask": "0x400", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "nvme_pcie": { 00:06:29.309 "mask": "0x800", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "iaa": { 00:06:29.309 "mask": "0x1000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "nvme_tcp": { 00:06:29.309 "mask": "0x2000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "bdev_nvme": { 00:06:29.309 "mask": "0x4000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "sock": { 00:06:29.309 "mask": "0x8000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "blob": { 00:06:29.309 "mask": "0x10000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "bdev_raid": { 00:06:29.309 "mask": "0x20000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 }, 00:06:29.309 "scheduler": { 00:06:29.309 "mask": "0x40000", 00:06:29.309 "tpoint_mask": "0x0" 00:06:29.309 } 00:06:29.309 }' 00:06:29.309 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:29.569 00:06:29.569 real 0m0.185s 00:06:29.569 user 0m0.160s 00:06:29.569 sys 0m0.016s 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.569 16:12:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 ************************************ 00:06:29.569 END TEST rpc_trace_cmd_test 00:06:29.569 ************************************ 00:06:29.569 16:12:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:29.569 16:12:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:29.569 16:12:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:29.569 16:12:19 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:29.569 16:12:19 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:29.569 16:12:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 ************************************ 00:06:29.569 START TEST rpc_daemon_integrity 00:06:29.569 ************************************ 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.569 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:29.569 { 00:06:29.569 "name": "Malloc2", 00:06:29.569 "aliases": [ 00:06:29.569 "bd8f274f-d67a-42ba-950d-00440592aafd" 00:06:29.569 ], 00:06:29.569 "product_name": "Malloc disk", 00:06:29.569 "block_size": 512, 00:06:29.569 "num_blocks": 16384, 00:06:29.569 "uuid": "bd8f274f-d67a-42ba-950d-00440592aafd", 00:06:29.569 "assigned_rate_limits": { 00:06:29.569 "rw_ios_per_sec": 0, 00:06:29.569 "rw_mbytes_per_sec": 0, 00:06:29.569 "r_mbytes_per_sec": 0, 00:06:29.569 "w_mbytes_per_sec": 0 00:06:29.569 }, 00:06:29.569 "claimed": false, 00:06:29.569 "zoned": false, 00:06:29.569 "supported_io_types": { 00:06:29.569 "read": true, 00:06:29.569 "write": true, 00:06:29.569 "unmap": true, 00:06:29.569 "flush": true, 00:06:29.569 "reset": true, 00:06:29.569 "nvme_admin": false, 00:06:29.569 "nvme_io": false, 00:06:29.569 "nvme_io_md": false, 00:06:29.569 "write_zeroes": true, 00:06:29.570 "zcopy": true, 00:06:29.570 "get_zone_info": false, 00:06:29.570 "zone_management": false, 00:06:29.570 "zone_append": false, 00:06:29.570 "compare": false, 00:06:29.570 "compare_and_write": false, 00:06:29.570 "abort": true, 00:06:29.570 "seek_hole": false, 00:06:29.570 "seek_data": false, 00:06:29.570 "copy": true, 00:06:29.570 "nvme_iov_md": false 00:06:29.570 }, 00:06:29.570 "memory_domains": [ 00:06:29.570 { 
00:06:29.570 "dma_device_id": "system", 00:06:29.570 "dma_device_type": 1 00:06:29.570 }, 00:06:29.570 { 00:06:29.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.570 "dma_device_type": 2 00:06:29.570 } 00:06:29.570 ], 00:06:29.570 "driver_specific": {} 00:06:29.570 } 00:06:29.570 ]' 00:06:29.570 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.829 [2024-11-19 16:12:19.929512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:29.829 [2024-11-19 16:12:19.929553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:29.829 [2024-11-19 16:12:19.929589] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xfcf6f0 00:06:29.829 [2024-11-19 16:12:19.929603] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:29.829 [2024-11-19 16:12:19.930789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:29.829 [2024-11-19 16:12:19.930811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:29.829 Passthru0 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:29.829 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:29.829 { 00:06:29.829 "name": "Malloc2", 00:06:29.829 "aliases": [ 00:06:29.829 "bd8f274f-d67a-42ba-950d-00440592aafd" 00:06:29.829 ], 00:06:29.829 "product_name": "Malloc disk", 00:06:29.829 "block_size": 512, 00:06:29.829 "num_blocks": 16384, 00:06:29.829 "uuid": "bd8f274f-d67a-42ba-950d-00440592aafd", 00:06:29.829 "assigned_rate_limits": { 00:06:29.829 "rw_ios_per_sec": 0, 00:06:29.829 "rw_mbytes_per_sec": 0, 00:06:29.829 "r_mbytes_per_sec": 0, 00:06:29.829 "w_mbytes_per_sec": 0 00:06:29.829 }, 00:06:29.829 "claimed": true, 00:06:29.829 "claim_type": "exclusive_write", 00:06:29.829 "zoned": false, 00:06:29.829 "supported_io_types": { 00:06:29.829 "read": true, 00:06:29.829 "write": true, 00:06:29.829 "unmap": true, 00:06:29.829 "flush": true, 00:06:29.829 "reset": true, 00:06:29.829 "nvme_admin": false, 00:06:29.829 "nvme_io": false, 00:06:29.829 "nvme_io_md": false, 00:06:29.829 "write_zeroes": true, 00:06:29.829 "zcopy": true, 00:06:29.829 "get_zone_info": false, 00:06:29.829 "zone_management": false, 00:06:29.829 "zone_append": false, 00:06:29.829 "compare": false, 00:06:29.829 "compare_and_write": false, 00:06:29.829 "abort": true, 00:06:29.829 "seek_hole": false, 00:06:29.829 "seek_data": false, 00:06:29.829 "copy": true, 00:06:29.829 "nvme_iov_md": false 00:06:29.829 }, 00:06:29.829 "memory_domains": [ 00:06:29.829 { 00:06:29.829 "dma_device_id": "system", 00:06:29.829 "dma_device_type": 1 00:06:29.829 }, 00:06:29.829 { 00:06:29.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.829 "dma_device_type": 2 00:06:29.829 } 00:06:29.829 ], 00:06:29.829 "driver_specific": {} 00:06:29.829 }, 00:06:29.829 { 00:06:29.829 "name": "Passthru0", 00:06:29.829 "aliases": [ 00:06:29.829 "b531f2b2-e98a-5e7e-b8a3-97671e41dde4" 00:06:29.830 ], 00:06:29.830 "product_name": "passthru", 00:06:29.830 "block_size": 512, 00:06:29.830 "num_blocks": 16384, 00:06:29.830 "uuid": 
"b531f2b2-e98a-5e7e-b8a3-97671e41dde4", 00:06:29.830 "assigned_rate_limits": { 00:06:29.830 "rw_ios_per_sec": 0, 00:06:29.830 "rw_mbytes_per_sec": 0, 00:06:29.830 "r_mbytes_per_sec": 0, 00:06:29.830 "w_mbytes_per_sec": 0 00:06:29.830 }, 00:06:29.830 "claimed": false, 00:06:29.830 "zoned": false, 00:06:29.830 "supported_io_types": { 00:06:29.830 "read": true, 00:06:29.830 "write": true, 00:06:29.830 "unmap": true, 00:06:29.830 "flush": true, 00:06:29.830 "reset": true, 00:06:29.830 "nvme_admin": false, 00:06:29.830 "nvme_io": false, 00:06:29.830 "nvme_io_md": false, 00:06:29.830 "write_zeroes": true, 00:06:29.830 "zcopy": true, 00:06:29.830 "get_zone_info": false, 00:06:29.830 "zone_management": false, 00:06:29.830 "zone_append": false, 00:06:29.830 "compare": false, 00:06:29.830 "compare_and_write": false, 00:06:29.830 "abort": true, 00:06:29.830 "seek_hole": false, 00:06:29.830 "seek_data": false, 00:06:29.830 "copy": true, 00:06:29.830 "nvme_iov_md": false 00:06:29.830 }, 00:06:29.830 "memory_domains": [ 00:06:29.830 { 00:06:29.830 "dma_device_id": "system", 00:06:29.830 "dma_device_type": 1 00:06:29.830 }, 00:06:29.830 { 00:06:29.830 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:29.830 "dma_device_type": 2 00:06:29.830 } 00:06:29.830 ], 00:06:29.830 "driver_specific": { 00:06:29.830 "passthru": { 00:06:29.830 "name": "Passthru0", 00:06:29.830 "base_bdev_name": "Malloc2" 00:06:29.830 } 00:06:29.830 } 00:06:29.830 } 00:06:29.830 ]' 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:29.830 16:12:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:29.830 00:06:29.830 real 0m0.215s 00:06:29.830 user 0m0.133s 00:06:29.830 sys 0m0.025s 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:29.830 16:12:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:29.830 ************************************ 00:06:29.830 END TEST rpc_daemon_integrity 00:06:29.830 ************************************ 00:06:29.830 16:12:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:29.830 16:12:20 rpc -- rpc/rpc.sh@84 -- # killprocess 99288 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@954 -- # '[' -z 99288 ']' 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@958 -- # kill -0 99288 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@959 -- # uname 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:29.830 16:12:20 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99288 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99288' 00:06:29.830 killing process with pid 99288 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@973 -- # kill 99288 00:06:29.830 16:12:20 rpc -- common/autotest_common.sh@978 -- # wait 99288 00:06:30.398 00:06:30.398 real 0m1.901s 00:06:30.398 user 0m2.390s 00:06:30.398 sys 0m0.571s 00:06:30.398 16:12:20 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.398 16:12:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.398 ************************************ 00:06:30.398 END TEST rpc 00:06:30.398 ************************************ 00:06:30.398 16:12:20 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:30.398 16:12:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.398 16:12:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.398 16:12:20 -- common/autotest_common.sh@10 -- # set +x 00:06:30.398 ************************************ 00:06:30.398 START TEST skip_rpc 00:06:30.398 ************************************ 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:30.398 * Looking for test storage... 
00:06:30.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.398 16:12:20 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.398 --rc genhtml_branch_coverage=1 00:06:30.398 --rc genhtml_function_coverage=1 00:06:30.398 --rc genhtml_legend=1 00:06:30.398 --rc geninfo_all_blocks=1 00:06:30.398 --rc geninfo_unexecuted_blocks=1 00:06:30.398 00:06:30.398 ' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.398 --rc genhtml_branch_coverage=1 00:06:30.398 --rc genhtml_function_coverage=1 00:06:30.398 --rc genhtml_legend=1 00:06:30.398 --rc geninfo_all_blocks=1 00:06:30.398 --rc geninfo_unexecuted_blocks=1 00:06:30.398 00:06:30.398 ' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.398 --rc genhtml_branch_coverage=1 00:06:30.398 --rc genhtml_function_coverage=1 00:06:30.398 --rc genhtml_legend=1 00:06:30.398 --rc geninfo_all_blocks=1 00:06:30.398 --rc geninfo_unexecuted_blocks=1 00:06:30.398 00:06:30.398 ' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.398 --rc genhtml_branch_coverage=1 00:06:30.398 --rc genhtml_function_coverage=1 00:06:30.398 --rc genhtml_legend=1 00:06:30.398 --rc geninfo_all_blocks=1 00:06:30.398 --rc geninfo_unexecuted_blocks=1 00:06:30.398 00:06:30.398 ' 00:06:30.398 16:12:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:30.398 16:12:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:30.398 16:12:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.398 16:12:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.398 ************************************ 00:06:30.398 START TEST skip_rpc 00:06:30.398 ************************************ 00:06:30.398 16:12:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:30.398 16:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=99695 00:06:30.398 16:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:30.398 16:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:30.398 16:12:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:30.659 [2024-11-19 16:12:20.748472] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:06:30.659 [2024-11-19 16:12:20.748554] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99695 ] 00:06:30.659 [2024-11-19 16:12:20.811468] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.659 [2024-11-19 16:12:20.857119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:35.924 16:12:25 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 99695 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 99695 ']' 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 99695 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 99695 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 99695' 00:06:35.924 killing process with pid 99695 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 99695 00:06:35.924 16:12:25 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 99695 00:06:35.924 00:06:35.924 real 0m5.414s 00:06:35.924 user 0m5.110s 00:06:35.924 sys 0m0.309s 00:06:35.924 16:12:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.924 16:12:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:35.924 ************************************ 00:06:35.924 END TEST skip_rpc 00:06:35.924 ************************************ 00:06:35.924 16:12:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:35.924 16:12:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.924 16:12:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.924 16:12:26 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:06:35.924 ************************************ 00:06:35.924 START TEST skip_rpc_with_json 00:06:35.924 ************************************ 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=100386 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 100386 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 100386 ']' 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.924 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:35.924 [2024-11-19 16:12:26.214639] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:06:35.924 [2024-11-19 16:12:26.214725] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100386 ] 00:06:36.184 [2024-11-19 16:12:26.280115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.184 [2024-11-19 16:12:26.322449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.443 [2024-11-19 16:12:26.572367] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:36.443 request: 00:06:36.443 { 00:06:36.443 "trtype": "tcp", 00:06:36.443 "method": "nvmf_get_transports", 00:06:36.443 "req_id": 1 00:06:36.443 } 00:06:36.443 Got JSON-RPC error response 00:06:36.443 response: 00:06:36.443 { 00:06:36.443 "code": -19, 00:06:36.443 "message": "No such device" 00:06:36.443 } 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.443 [2024-11-19 16:12:26.580490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:36.443 16:12:26 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:36.443 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:36.443 { 00:06:36.443 "subsystems": [ 00:06:36.444 { 00:06:36.444 "subsystem": "fsdev", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "fsdev_set_opts", 00:06:36.444 "params": { 00:06:36.444 "fsdev_io_pool_size": 65535, 00:06:36.444 "fsdev_io_cache_size": 256 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "vfio_user_target", 00:06:36.444 "config": null 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "keyring", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "iobuf", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "iobuf_set_options", 00:06:36.444 "params": { 00:06:36.444 "small_pool_count": 8192, 00:06:36.444 "large_pool_count": 1024, 00:06:36.444 "small_bufsize": 8192, 00:06:36.444 "large_bufsize": 135168, 00:06:36.444 "enable_numa": false 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "sock", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "sock_set_default_impl", 00:06:36.444 "params": { 00:06:36.444 "impl_name": "posix" 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "sock_impl_set_options", 00:06:36.444 "params": { 00:06:36.444 "impl_name": "ssl", 00:06:36.444 "recv_buf_size": 4096, 00:06:36.444 "send_buf_size": 4096, 
00:06:36.444 "enable_recv_pipe": true, 00:06:36.444 "enable_quickack": false, 00:06:36.444 "enable_placement_id": 0, 00:06:36.444 "enable_zerocopy_send_server": true, 00:06:36.444 "enable_zerocopy_send_client": false, 00:06:36.444 "zerocopy_threshold": 0, 00:06:36.444 "tls_version": 0, 00:06:36.444 "enable_ktls": false 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "sock_impl_set_options", 00:06:36.444 "params": { 00:06:36.444 "impl_name": "posix", 00:06:36.444 "recv_buf_size": 2097152, 00:06:36.444 "send_buf_size": 2097152, 00:06:36.444 "enable_recv_pipe": true, 00:06:36.444 "enable_quickack": false, 00:06:36.444 "enable_placement_id": 0, 00:06:36.444 "enable_zerocopy_send_server": true, 00:06:36.444 "enable_zerocopy_send_client": false, 00:06:36.444 "zerocopy_threshold": 0, 00:06:36.444 "tls_version": 0, 00:06:36.444 "enable_ktls": false 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "vmd", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "accel", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "accel_set_options", 00:06:36.444 "params": { 00:06:36.444 "small_cache_size": 128, 00:06:36.444 "large_cache_size": 16, 00:06:36.444 "task_count": 2048, 00:06:36.444 "sequence_count": 2048, 00:06:36.444 "buf_count": 2048 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "bdev", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "bdev_set_options", 00:06:36.444 "params": { 00:06:36.444 "bdev_io_pool_size": 65535, 00:06:36.444 "bdev_io_cache_size": 256, 00:06:36.444 "bdev_auto_examine": true, 00:06:36.444 "iobuf_small_cache_size": 128, 00:06:36.444 "iobuf_large_cache_size": 16 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "bdev_raid_set_options", 00:06:36.444 "params": { 00:06:36.444 "process_window_size_kb": 1024, 00:06:36.444 "process_max_bandwidth_mb_sec": 0 
00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "bdev_iscsi_set_options", 00:06:36.444 "params": { 00:06:36.444 "timeout_sec": 30 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "bdev_nvme_set_options", 00:06:36.444 "params": { 00:06:36.444 "action_on_timeout": "none", 00:06:36.444 "timeout_us": 0, 00:06:36.444 "timeout_admin_us": 0, 00:06:36.444 "keep_alive_timeout_ms": 10000, 00:06:36.444 "arbitration_burst": 0, 00:06:36.444 "low_priority_weight": 0, 00:06:36.444 "medium_priority_weight": 0, 00:06:36.444 "high_priority_weight": 0, 00:06:36.444 "nvme_adminq_poll_period_us": 10000, 00:06:36.444 "nvme_ioq_poll_period_us": 0, 00:06:36.444 "io_queue_requests": 0, 00:06:36.444 "delay_cmd_submit": true, 00:06:36.444 "transport_retry_count": 4, 00:06:36.444 "bdev_retry_count": 3, 00:06:36.444 "transport_ack_timeout": 0, 00:06:36.444 "ctrlr_loss_timeout_sec": 0, 00:06:36.444 "reconnect_delay_sec": 0, 00:06:36.444 "fast_io_fail_timeout_sec": 0, 00:06:36.444 "disable_auto_failback": false, 00:06:36.444 "generate_uuids": false, 00:06:36.444 "transport_tos": 0, 00:06:36.444 "nvme_error_stat": false, 00:06:36.444 "rdma_srq_size": 0, 00:06:36.444 "io_path_stat": false, 00:06:36.444 "allow_accel_sequence": false, 00:06:36.444 "rdma_max_cq_size": 0, 00:06:36.444 "rdma_cm_event_timeout_ms": 0, 00:06:36.444 "dhchap_digests": [ 00:06:36.444 "sha256", 00:06:36.444 "sha384", 00:06:36.444 "sha512" 00:06:36.444 ], 00:06:36.444 "dhchap_dhgroups": [ 00:06:36.444 "null", 00:06:36.444 "ffdhe2048", 00:06:36.444 "ffdhe3072", 00:06:36.444 "ffdhe4096", 00:06:36.444 "ffdhe6144", 00:06:36.444 "ffdhe8192" 00:06:36.444 ] 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "bdev_nvme_set_hotplug", 00:06:36.444 "params": { 00:06:36.444 "period_us": 100000, 00:06:36.444 "enable": false 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "bdev_wait_for_examine" 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 
00:06:36.444 "subsystem": "scsi", 00:06:36.444 "config": null 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "scheduler", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "framework_set_scheduler", 00:06:36.444 "params": { 00:06:36.444 "name": "static" 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "vhost_scsi", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "vhost_blk", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "ublk", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "nbd", 00:06:36.444 "config": [] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "nvmf", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "nvmf_set_config", 00:06:36.444 "params": { 00:06:36.444 "discovery_filter": "match_any", 00:06:36.444 "admin_cmd_passthru": { 00:06:36.444 "identify_ctrlr": false 00:06:36.444 }, 00:06:36.444 "dhchap_digests": [ 00:06:36.444 "sha256", 00:06:36.444 "sha384", 00:06:36.444 "sha512" 00:06:36.444 ], 00:06:36.444 "dhchap_dhgroups": [ 00:06:36.444 "null", 00:06:36.444 "ffdhe2048", 00:06:36.444 "ffdhe3072", 00:06:36.444 "ffdhe4096", 00:06:36.444 "ffdhe6144", 00:06:36.444 "ffdhe8192" 00:06:36.444 ] 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "nvmf_set_max_subsystems", 00:06:36.444 "params": { 00:06:36.444 "max_subsystems": 1024 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "nvmf_set_crdt", 00:06:36.444 "params": { 00:06:36.444 "crdt1": 0, 00:06:36.444 "crdt2": 0, 00:06:36.444 "crdt3": 0 00:06:36.444 } 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "method": "nvmf_create_transport", 00:06:36.444 "params": { 00:06:36.444 "trtype": "TCP", 00:06:36.444 "max_queue_depth": 128, 00:06:36.444 "max_io_qpairs_per_ctrlr": 127, 00:06:36.444 "in_capsule_data_size": 4096, 00:06:36.444 "max_io_size": 131072, 00:06:36.444 
"io_unit_size": 131072, 00:06:36.444 "max_aq_depth": 128, 00:06:36.444 "num_shared_buffers": 511, 00:06:36.444 "buf_cache_size": 4294967295, 00:06:36.444 "dif_insert_or_strip": false, 00:06:36.444 "zcopy": false, 00:06:36.444 "c2h_success": true, 00:06:36.444 "sock_priority": 0, 00:06:36.444 "abort_timeout_sec": 1, 00:06:36.444 "ack_timeout": 0, 00:06:36.444 "data_wr_pool_size": 0 00:06:36.444 } 00:06:36.444 } 00:06:36.444 ] 00:06:36.444 }, 00:06:36.444 { 00:06:36.444 "subsystem": "iscsi", 00:06:36.444 "config": [ 00:06:36.444 { 00:06:36.444 "method": "iscsi_set_options", 00:06:36.444 "params": { 00:06:36.444 "node_base": "iqn.2016-06.io.spdk", 00:06:36.444 "max_sessions": 128, 00:06:36.444 "max_connections_per_session": 2, 00:06:36.444 "max_queue_depth": 64, 00:06:36.444 "default_time2wait": 2, 00:06:36.444 "default_time2retain": 20, 00:06:36.444 "first_burst_length": 8192, 00:06:36.444 "immediate_data": true, 00:06:36.444 "allow_duplicated_isid": false, 00:06:36.444 "error_recovery_level": 0, 00:06:36.444 "nop_timeout": 60, 00:06:36.444 "nop_in_interval": 30, 00:06:36.445 "disable_chap": false, 00:06:36.445 "require_chap": false, 00:06:36.445 "mutual_chap": false, 00:06:36.445 "chap_group": 0, 00:06:36.445 "max_large_datain_per_connection": 64, 00:06:36.445 "max_r2t_per_connection": 4, 00:06:36.445 "pdu_pool_size": 36864, 00:06:36.445 "immediate_data_pool_size": 16384, 00:06:36.445 "data_out_pool_size": 2048 00:06:36.445 } 00:06:36.445 } 00:06:36.445 ] 00:06:36.445 } 00:06:36.445 ] 00:06:36.445 } 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 100386 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100386 ']' 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100386 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.445 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100386 00:06:36.702 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.703 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.703 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100386' 00:06:36.703 killing process with pid 100386 00:06:36.703 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100386 00:06:36.703 16:12:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100386 00:06:36.961 16:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=100528 00:06:36.961 16:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:36.961 16:12:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 100528 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 100528 ']' 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 100528 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 100528 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 100528' 00:06:42.230 killing process with pid 100528 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 100528 00:06:42.230 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 100528 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:42.491 00:06:42.491 real 0m6.415s 00:06:42.491 user 0m6.077s 00:06:42.491 sys 0m0.657s 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:42.491 ************************************ 00:06:42.491 END TEST skip_rpc_with_json 00:06:42.491 ************************************ 00:06:42.491 16:12:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.491 ************************************ 00:06:42.491 START TEST skip_rpc_with_delay 00:06:42.491 ************************************ 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:42.491 [2024-11-19 16:12:32.682200] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:42.491 00:06:42.491 real 0m0.073s 00:06:42.491 user 0m0.045s 00:06:42.491 sys 0m0.027s 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.491 16:12:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:42.491 ************************************ 00:06:42.491 END TEST skip_rpc_with_delay 00:06:42.491 ************************************ 00:06:42.491 16:12:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:42.491 16:12:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:42.491 16:12:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.491 16:12:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.491 ************************************ 00:06:42.491 START TEST exit_on_failed_rpc_init 00:06:42.491 ************************************ 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=101237 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 101237 
00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 101237 ']' 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.491 16:12:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:42.491 [2024-11-19 16:12:32.806160] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:06:42.491 [2024-11-19 16:12:32.806257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101237 ] 00:06:42.750 [2024-11-19 16:12:32.873646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.750 [2024-11-19 16:12:32.923377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.009 
16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:43.009 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:43.009 [2024-11-19 16:12:33.235464] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:06:43.009 [2024-11-19 16:12:33.235568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101256 ] 00:06:43.009 [2024-11-19 16:12:33.300950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.268 [2024-11-19 16:12:33.348570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.268 [2024-11-19 16:12:33.348681] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:43.268 [2024-11-19 16:12:33.348700] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:43.268 [2024-11-19 16:12:33.348712] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 101237 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 101237 ']' 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 101237 00:06:43.268 16:12:33 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101237 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101237' 00:06:43.268 killing process with pid 101237 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 101237 00:06:43.268 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 101237 00:06:43.527 00:06:43.527 real 0m1.073s 00:06:43.527 user 0m1.161s 00:06:43.527 sys 0m0.424s 00:06:43.527 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.527 16:12:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:43.527 ************************************ 00:06:43.527 END TEST exit_on_failed_rpc_init 00:06:43.527 ************************************ 00:06:43.527 16:12:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:43.527 00:06:43.527 real 0m13.325s 00:06:43.527 user 0m12.587s 00:06:43.527 sys 0m1.593s 00:06:43.527 16:12:33 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.527 16:12:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:43.527 ************************************ 00:06:43.527 END TEST skip_rpc 00:06:43.527 ************************************ 00:06:43.787 16:12:33 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:43.787 16:12:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.787 16:12:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.787 16:12:33 -- common/autotest_common.sh@10 -- # set +x 00:06:43.787 ************************************ 00:06:43.787 START TEST rpc_client 00:06:43.787 ************************************ 00:06:43.787 16:12:33 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:43.787 * Looking for test storage... 00:06:43.787 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:43.787 16:12:33 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:43.787 16:12:33 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:43.787 16:12:33 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:43.787 16:12:34 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.787 --rc genhtml_branch_coverage=1 00:06:43.787 --rc genhtml_function_coverage=1 00:06:43.787 --rc genhtml_legend=1 00:06:43.787 --rc geninfo_all_blocks=1 00:06:43.787 --rc geninfo_unexecuted_blocks=1 00:06:43.787 00:06:43.787 ' 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.787 --rc genhtml_branch_coverage=1 
00:06:43.787 --rc genhtml_function_coverage=1 00:06:43.787 --rc genhtml_legend=1 00:06:43.787 --rc geninfo_all_blocks=1 00:06:43.787 --rc geninfo_unexecuted_blocks=1 00:06:43.787 00:06:43.787 ' 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.787 --rc genhtml_branch_coverage=1 00:06:43.787 --rc genhtml_function_coverage=1 00:06:43.787 --rc genhtml_legend=1 00:06:43.787 --rc geninfo_all_blocks=1 00:06:43.787 --rc geninfo_unexecuted_blocks=1 00:06:43.787 00:06:43.787 ' 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:43.787 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.787 --rc genhtml_branch_coverage=1 00:06:43.787 --rc genhtml_function_coverage=1 00:06:43.787 --rc genhtml_legend=1 00:06:43.787 --rc geninfo_all_blocks=1 00:06:43.787 --rc geninfo_unexecuted_blocks=1 00:06:43.787 00:06:43.787 ' 00:06:43.787 16:12:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:43.787 OK 00:06:43.787 16:12:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:43.787 00:06:43.787 real 0m0.148s 00:06:43.787 user 0m0.101s 00:06:43.787 sys 0m0.056s 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.787 16:12:34 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:43.787 ************************************ 00:06:43.787 END TEST rpc_client 00:06:43.787 ************************************ 00:06:43.787 16:12:34 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:43.787 16:12:34 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:43.787 16:12:34 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:43.787 16:12:34 -- common/autotest_common.sh@10 
-- # set +x 00:06:43.787 ************************************ 00:06:43.787 START TEST json_config 00:06:43.787 ************************************ 00:06:43.787 16:12:34 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.048 16:12:34 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.048 16:12:34 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.048 16:12:34 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.048 16:12:34 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.048 16:12:34 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.048 16:12:34 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:44.048 16:12:34 json_config -- scripts/common.sh@345 -- # : 1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.048 16:12:34 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.048 16:12:34 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@353 -- # local d=1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.048 16:12:34 json_config -- scripts/common.sh@355 -- # echo 1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.048 16:12:34 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@353 -- # local d=2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.048 16:12:34 json_config -- scripts/common.sh@355 -- # echo 2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.048 16:12:34 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.048 16:12:34 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.048 16:12:34 json_config -- scripts/common.sh@368 -- # return 0 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.048 --rc genhtml_branch_coverage=1 00:06:44.048 --rc genhtml_function_coverage=1 00:06:44.048 --rc genhtml_legend=1 00:06:44.048 --rc geninfo_all_blocks=1 00:06:44.048 --rc geninfo_unexecuted_blocks=1 00:06:44.048 00:06:44.048 ' 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.048 --rc genhtml_branch_coverage=1 00:06:44.048 --rc genhtml_function_coverage=1 00:06:44.048 --rc genhtml_legend=1 00:06:44.048 --rc geninfo_all_blocks=1 00:06:44.048 --rc geninfo_unexecuted_blocks=1 00:06:44.048 00:06:44.048 ' 00:06:44.048 16:12:34 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.048 --rc genhtml_branch_coverage=1 00:06:44.048 --rc genhtml_function_coverage=1 00:06:44.048 --rc genhtml_legend=1 00:06:44.048 --rc geninfo_all_blocks=1 00:06:44.048 --rc geninfo_unexecuted_blocks=1 00:06:44.048 00:06:44.048 ' 00:06:44.048 16:12:34 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:44.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.048 --rc genhtml_branch_coverage=1 00:06:44.048 --rc genhtml_function_coverage=1 00:06:44.048 --rc genhtml_legend=1 00:06:44.048 --rc geninfo_all_blocks=1 00:06:44.048 --rc geninfo_unexecuted_blocks=1 00:06:44.048 00:06:44.048 ' 00:06:44.048 16:12:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.048 16:12:34 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.048 16:12:34 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.048 16:12:34 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.048 16:12:34 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.048 16:12:34 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.048 16:12:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.048 16:12:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.048 16:12:34 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.049 16:12:34 json_config -- paths/export.sh@5 -- # export PATH 00:06:44.049 16:12:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@51 -- # : 0 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.049 16:12:34 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@9 -- # source 
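The `[: : integer expression expected` line captured above comes from `nvmf/common.sh` line 33 running `'[' '' -eq 1 ']'`: a numeric `-eq` test against an unset/empty flag. The harness tolerates the nonzero exit, but the failure mode and the usual guard can be reproduced in isolation (`FLAG` here is an illustrative name, not the variable the harness uses):

```shell
# Reproduce the benign error seen in the log: [ with -eq requires both
# operands to be integers, and an empty expansion is not one.
FLAG=""

[ "$FLAG" -eq 1 ] 2>/dev/null && echo "set" || echo "-eq on empty string fails"

# Defaulting the expansion keeps the numeric test well-formed:
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag enabled"
else
    echo "flag disabled"
fi
```

The `${FLAG:-0}` default is the common fix when a test flag may be entirely unset in the environment.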
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:44.049 INFO: JSON configuration test init 00:06:44.049 16:12:34 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.049 16:12:34 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:44.049 16:12:34 json_config -- json_config/common.sh@9 -- # local app=target 00:06:44.049 16:12:34 json_config -- json_config/common.sh@10 -- # shift 00:06:44.049 16:12:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:44.049 16:12:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:44.049 16:12:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:44.049 16:12:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.049 16:12:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:44.049 16:12:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=101514 00:06:44.049 16:12:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:44.049 16:12:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:44.049 Waiting for target to run... 
00:06:44.049 16:12:34 json_config -- json_config/common.sh@25 -- # waitforlisten 101514 /var/tmp/spdk_tgt.sock 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@835 -- # '[' -z 101514 ']' 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:44.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:44.049 16:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:44.049 [2024-11-19 16:12:34.295590] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:06:44.049 [2024-11-19 16:12:34.295676] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101514 ] 00:06:44.308 [2024-11-19 16:12:34.639708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.566 [2024-11-19 16:12:34.671975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:45.134 16:12:35 json_config -- json_config/common.sh@26 -- # echo '' 00:06:45.134 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@726 
-- # xtrace_disable 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.134 16:12:35 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:45.134 16:12:35 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:45.134 16:12:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:48.426 16:12:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.426 16:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:48.426 16:12:38 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@54 -- # sort 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:48.426 16:12:38 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:48.426 16:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:48.426 16:12:38 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:48.684 16:12:38 json_config -- json_config/json_config.sh@237 -- # timing_enter 
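The `type_diff` computation above concatenates the expected `enabled_types` and the `notify_get_types` result, then pipes through `tr ' ' '\n' | sort | uniq -u`: since `uniq -u` keeps only lines occurring exactly once, this yields the symmetric difference of the two lists (assuming each list is duplicate-free), and an empty result means they match. A self-contained sketch with stand-in data:

```shell
# sort | uniq -u as a symmetric difference, as in the type_diff check:
# any name present in both lists appears twice and is dropped by uniq -u.
expected="bdev_register bdev_unregister fsdev_register fsdev_unregister"
actual="fsdev_register fsdev_unregister bdev_register bdev_unregister"

diff_out=$(echo $expected $actual | tr ' ' '\n' | sort | uniq -u)

if [ -z "$diff_out" ]; then
    echo "lists match"
else
    echo "mismatch: $diff_out"
fi
```

That empty `type_diff=` assignment in the trace is exactly this "lists match" case, which is why the function reaches `return 0`.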
create_nvmf_subsystem_config 00:06:48.684 16:12:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:48.684 16:12:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:48.684 16:12:38 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:48.684 16:12:38 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:48.684 16:12:38 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:48.684 16:12:38 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:48.684 16:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:48.684 MallocForNvmf0 00:06:48.943 16:12:39 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:48.943 16:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:49.201 MallocForNvmf1 00:06:49.201 16:12:39 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:49.201 16:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:49.458 [2024-11-19 16:12:39.544894] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:49.458 16:12:39 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:49.458 16:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:49.717 16:12:39 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:49.717 16:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:49.975 16:12:40 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:49.975 16:12:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:50.233 16:12:40 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:50.233 16:12:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:50.492 [2024-11-19 16:12:40.624366] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:50.492 16:12:40 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:50.492 16:12:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.492 16:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.492 16:12:40 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:50.492 16:12:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.492 16:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.492 16:12:40 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:50.492 16:12:40 
json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:50.492 16:12:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:50.750 MallocBdevForConfigChangeCheck 00:06:50.750 16:12:40 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:50.750 16:12:40 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:50.750 16:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.750 16:12:40 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:50.750 16:12:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:51.316 16:12:41 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:06:51.316 INFO: shutting down applications... 
00:06:51.316 16:12:41 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:51.316 16:12:41 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:51.316 16:12:41 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:51.317 16:12:41 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:52.692 Calling clear_iscsi_subsystem 00:06:52.692 Calling clear_nvmf_subsystem 00:06:52.692 Calling clear_nbd_subsystem 00:06:52.692 Calling clear_ublk_subsystem 00:06:52.692 Calling clear_vhost_blk_subsystem 00:06:52.692 Calling clear_vhost_scsi_subsystem 00:06:52.692 Calling clear_bdev_subsystem 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:52.692 16:12:43 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:53.260 16:12:43 json_config -- json_config/json_config.sh@352 -- # break 00:06:53.260 16:12:43 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:53.260 16:12:43 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:06:53.260 16:12:43 json_config -- 
json_config/common.sh@31 -- # local app=target 00:06:53.260 16:12:43 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:53.260 16:12:43 json_config -- json_config/common.sh@35 -- # [[ -n 101514 ]] 00:06:53.260 16:12:43 json_config -- json_config/common.sh@38 -- # kill -SIGINT 101514 00:06:53.260 16:12:43 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:53.260 16:12:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.260 16:12:43 json_config -- json_config/common.sh@41 -- # kill -0 101514 00:06:53.260 16:12:43 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:53.833 16:12:43 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:53.833 16:12:43 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:53.833 16:12:43 json_config -- json_config/common.sh@41 -- # kill -0 101514 00:06:53.833 16:12:43 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:53.833 16:12:43 json_config -- json_config/common.sh@43 -- # break 00:06:53.833 16:12:43 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:53.833 16:12:43 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:53.833 SPDK target shutdown done 00:06:53.833 16:12:43 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:53.833 INFO: relaunching applications... 
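The shutdown sequence above (`json_config_test_shutdown_app`) sends `SIGINT` to the target, then polls up to 30 times with `kill -0 $pid` and `sleep 0.5` (~15 s) before declaring "SPDK target shutdown done". `kill -0` sends no signal; it only checks that the PID still exists. A generic sketch of the pattern (the stand-in process and the use of SIGTERM are illustrative; the harness signals its own spdk_tgt with SIGINT, but background jobs of a non-interactive shell ignore SIGINT, so this sketch uses SIGTERM):

```shell
# Poll-for-exit shutdown pattern: signal, then kill -0 until gone.
sleep 60 &                 # stand-in for the spdk_tgt process
pid=$!

kill -TERM "$pid"
for (( i = 0; i < 30; i++ )); do
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "target shutdown done"
        break
    fi
    sleep 0.5
done
```

The bounded loop is what lets the harness fall through to a hard failure path instead of hanging forever on a wedged target.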
00:06:53.833 16:12:43 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.833 16:12:43 json_config -- json_config/common.sh@9 -- # local app=target 00:06:53.833 16:12:43 json_config -- json_config/common.sh@10 -- # shift 00:06:53.833 16:12:43 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:53.833 16:12:43 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:53.833 16:12:43 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:53.833 16:12:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.833 16:12:43 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:53.833 16:12:43 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=102822 00:06:53.833 16:12:43 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:53.833 Waiting for target to run... 00:06:53.833 16:12:43 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.833 16:12:43 json_config -- json_config/common.sh@25 -- # waitforlisten 102822 /var/tmp/spdk_tgt.sock 00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@835 -- # '[' -z 102822 ']' 00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:53.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.833 16:12:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:53.833 [2024-11-19 16:12:43.995711] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:06:53.833 [2024-11-19 16:12:43.995804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid102822 ] 00:06:54.403 [2024-11-19 16:12:44.520979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.403 [2024-11-19 16:12:44.562500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.691 [2024-11-19 16:12:47.605581] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:57.691 [2024-11-19 16:12:47.638045] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:57.691 16:12:47 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.691 16:12:47 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:57.691 16:12:47 json_config -- json_config/common.sh@26 -- # echo '' 00:06:57.691 00:06:57.691 16:12:47 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:57.691 16:12:47 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:57.691 INFO: Checking if target configuration is the same... 
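The `waitforlisten` step above blocks until the relaunched spdk_tgt answers on /var/tmp/spdk_tgt.sock (max_retries=100). A minimal hypothetical stand-in, reduced to polling for the UNIX socket file (the real helper additionally confirms the RPC server responds on the socket):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for waitforlisten: poll for the target's UNIX
# domain socket with bounded retries. The harness default is 100 retries.
waitforlisten() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [[ -S $sock ]] && return 0   # socket file exists: someone bound it
    sleep 0.1
  done
  return 1
}

# Demo listener (assumes python3 is available, as on the CI host):
sock=$(mktemp -u)
python3 -c "import socket, time
s = socket.socket(socket.AF_UNIX)
s.bind('$sock'); s.listen(1); time.sleep(10)" > /dev/null 2>&1 &
listener=$!
waitforlisten "$sock" 50 && echo "listening on $sock"
kill "$listener" 2>/dev/null
rm -f "$sock"
```

Checking `-S` only proves the socket file was bound, not that RPCs are served yet, which is why the harness pairs the wait with an actual RPC probe.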
00:06:57.691 16:12:47 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:57.691 16:12:47 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:57.691 16:12:47 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:57.691 + '[' 2 -ne 2 ']' 00:06:57.691 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:57.691 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:57.691 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:57.691 +++ basename /dev/fd/62 00:06:57.691 ++ mktemp /tmp/62.XXX 00:06:57.691 + tmp_file_1=/tmp/62.a50 00:06:57.691 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:57.691 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:57.691 + tmp_file_2=/tmp/spdk_tgt_config.json.Ji2 00:06:57.691 + ret=0 00:06:57.691 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:57.949 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:57.949 + diff -u /tmp/62.a50 /tmp/spdk_tgt_config.json.Ji2 00:06:57.949 + echo 'INFO: JSON config files are the same' 00:06:57.949 INFO: JSON config files are the same 00:06:57.949 + rm /tmp/62.a50 /tmp/spdk_tgt_config.json.Ji2 00:06:57.949 + exit 0 00:06:57.949 16:12:48 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:57.949 16:12:48 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:57.949 INFO: changing configuration and checking if this can be detected... 
00:06:57.949 16:12:48 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:57.949 16:12:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:58.208 16:12:48 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:58.208 16:12:48 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:58.208 16:12:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:58.208 + '[' 2 -ne 2 ']' 00:06:58.208 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:58.208 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:58.208 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:58.208 +++ basename /dev/fd/62 00:06:58.208 ++ mktemp /tmp/62.XXX 00:06:58.208 + tmp_file_1=/tmp/62.wRK 00:06:58.208 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:58.208 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:58.208 + tmp_file_2=/tmp/spdk_tgt_config.json.99B 00:06:58.208 + ret=0 00:06:58.208 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:58.466 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:58.725 + diff -u /tmp/62.wRK /tmp/spdk_tgt_config.json.99B 00:06:58.725 + ret=1 00:06:58.725 + echo '=== Start of file: /tmp/62.wRK ===' 00:06:58.725 + cat /tmp/62.wRK 00:06:58.725 + echo '=== End of file: /tmp/62.wRK ===' 00:06:58.725 + echo '' 00:06:58.725 + echo '=== Start of file: /tmp/spdk_tgt_config.json.99B ===' 00:06:58.725 + cat /tmp/spdk_tgt_config.json.99B 00:06:58.725 + echo '=== End of file: /tmp/spdk_tgt_config.json.99B ===' 00:06:58.725 + echo '' 00:06:58.725 + rm /tmp/62.wRK /tmp/spdk_tgt_config.json.99B 00:06:58.725 + exit 1 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:58.725 INFO: configuration change detected. 
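The check traced above saves the live config over RPC, runs both JSON documents through config_filter.py -method sort, and diffs the results; deleting MallocBdevForConfigChangeCheck is what flips the outcome to "configuration change detected." A rough hypothetical stand-in for the sort-and-diff step (the real filter is SPDK's test/json_config/config_filter.py):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the sort-and-diff comparison traced above:
# normalize key order in both JSON configs before diffing, so that key
# order alone never counts as a configuration change.
normalize() {
  python3 -c 'import json, sys; print(json.dumps(json.load(sys.stdin), sort_keys=True, indent=2))'
}

tmp_file_1=$(mktemp /tmp/62.XXX)
tmp_file_2=$(mktemp /tmp/spdk_tgt_config.json.XXX)
echo '{"subsystems": [{"subsystem": "bdev"}], "version": 1}' | normalize > "$tmp_file_1"
echo '{"version": 1, "subsystems": [{"subsystem": "bdev"}]}' | normalize > "$tmp_file_2"

if diff -u "$tmp_file_1" "$tmp_file_2" > /dev/null; then
  echo 'INFO: JSON config files are the same'
else
  echo 'INFO: configuration change detected.'
fi
rm -f "$tmp_file_1" "$tmp_file_2"
```

Here the two inputs differ only in key order, so the normalized files compare equal and the sketch prints "INFO: JSON config files are the same".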
00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@324 -- # [[ -n 102822 ]] 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:58.725 16:12:48 json_config -- json_config/json_config.sh@330 -- # killprocess 102822 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@954 -- # '[' -z 102822 ']' 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@958 -- # kill -0 102822 
00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@959 -- # uname 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 102822 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 102822' 00:06:58.725 killing process with pid 102822 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@973 -- # kill 102822 00:06:58.725 16:12:48 json_config -- common/autotest_common.sh@978 -- # wait 102822 00:07:00.629 16:12:50 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:07:00.629 16:12:50 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:07:00.629 16:12:50 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.629 16:12:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.629 16:12:50 json_config -- json_config/json_config.sh@335 -- # return 0 00:07:00.629 16:12:50 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:07:00.629 INFO: Success 00:07:00.629 00:07:00.629 real 0m16.412s 00:07:00.629 user 0m18.499s 00:07:00.629 sys 0m2.028s 00:07:00.629 16:12:50 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:00.629 16:12:50 json_config -- common/autotest_common.sh@10 -- # set +x 00:07:00.629 ************************************ 00:07:00.629 END TEST json_config 00:07:00.629 ************************************ 00:07:00.629 16:12:50 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:00.629 16:12:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:00.629 16:12:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:00.629 16:12:50 -- common/autotest_common.sh@10 -- # set +x 00:07:00.629 ************************************ 00:07:00.629 START TEST json_config_extra_key 00:07:00.629 ************************************ 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.629 16:12:50 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.629 --rc genhtml_branch_coverage=1 00:07:00.629 --rc genhtml_function_coverage=1 00:07:00.629 --rc genhtml_legend=1 00:07:00.629 --rc geninfo_all_blocks=1 
00:07:00.629 --rc geninfo_unexecuted_blocks=1 00:07:00.629 00:07:00.629 ' 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.629 --rc genhtml_branch_coverage=1 00:07:00.629 --rc genhtml_function_coverage=1 00:07:00.629 --rc genhtml_legend=1 00:07:00.629 --rc geninfo_all_blocks=1 00:07:00.629 --rc geninfo_unexecuted_blocks=1 00:07:00.629 00:07:00.629 ' 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.629 --rc genhtml_branch_coverage=1 00:07:00.629 --rc genhtml_function_coverage=1 00:07:00.629 --rc genhtml_legend=1 00:07:00.629 --rc geninfo_all_blocks=1 00:07:00.629 --rc geninfo_unexecuted_blocks=1 00:07:00.629 00:07:00.629 ' 00:07:00.629 16:12:50 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.629 --rc genhtml_branch_coverage=1 00:07:00.629 --rc genhtml_function_coverage=1 00:07:00.629 --rc genhtml_legend=1 00:07:00.629 --rc geninfo_all_blocks=1 00:07:00.629 --rc geninfo_unexecuted_blocks=1 00:07:00.629 00:07:00.629 ' 00:07:00.629 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:00.629 16:12:50 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:07:00.629 16:12:50 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
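The scripts/common.sh trace earlier in this test's setup (`lt 1.15 2` via `cmp_versions`: split each version on `IFS=.-:` with `read -ra`, then compare field by field) follows a common pattern. A compact hypothetical re-implementation of the same idea:

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the cmp_versions logic traced above:
# split on ".", "-" and ":", then compare numerically field by field,
# treating missing fields as 0 (non-numeric fields also evaluate to 0).
version_lt() {
  local IFS=.-: i v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  for (( i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++ )); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1   # equal is not less-than
}

version_lt 1.15 2      && echo "1.15 < 2"
version_lt 2.39.2 2.40 && echo "2.39.2 < 2.40"
```

This is why the harness treats lcov 1.15 as older than 2 even though 15 > 2 as a plain integer: the comparison is per dotted field, not on the whole string.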
00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:00.630 16:12:50 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:07:00.630 16:12:50 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:00.630 16:12:50 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:00.630 16:12:50 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:00.630 16:12:50 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.630 16:12:50 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.630 16:12:50 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.630 16:12:50 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:07:00.630 16:12:50 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:07:00.630 16:12:50 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:00.630 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:00.630 16:12:50 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:07:00.630 INFO: launching applications... 00:07:00.630 16:12:50 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=103766 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:07:00.630 Waiting for target to run... 
00:07:00.630 16:12:50 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 103766 /var/tmp/spdk_tgt.sock 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 103766 ']' 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:07:00.630 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.630 16:12:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:07:00.630 [2024-11-19 16:12:50.763766] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:00.630 [2024-11-19 16:12:50.763874] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103766 ] 00:07:00.896 [2024-11-19 16:12:51.126303] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.896 [2024-11-19 16:12:51.156947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.463 16:12:51 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.463 16:12:51 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:07:01.463 00:07:01.463 16:12:51 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:07:01.463 INFO: shutting down applications... 00:07:01.463 16:12:51 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 103766 ]] 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 103766 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103766 00:07:01.463 16:12:51 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 103766 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@43 -- # break 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:07:02.031 16:12:52 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:07:02.031 SPDK target shutdown done 00:07:02.031 16:12:52 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:07:02.031 Success 00:07:02.031 00:07:02.031 real 0m1.706s 00:07:02.031 user 0m1.686s 00:07:02.031 sys 0m0.449s 00:07:02.031 16:12:52 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.031 16:12:52 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 
00:07:02.031 ************************************ 00:07:02.031 END TEST json_config_extra_key 00:07:02.031 ************************************ 00:07:02.031 16:12:52 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:02.031 16:12:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.031 16:12:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.031 16:12:52 -- common/autotest_common.sh@10 -- # set +x 00:07:02.031 ************************************ 00:07:02.031 START TEST alias_rpc 00:07:02.031 ************************************ 00:07:02.031 16:12:52 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:07:02.031 * Looking for test storage... 00:07:02.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 
00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@345 -- # : 1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.291 16:12:52 alias_rpc -- scripts/common.sh@368 -- # return 0 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.291 --rc genhtml_branch_coverage=1 00:07:02.291 --rc genhtml_function_coverage=1 00:07:02.291 --rc genhtml_legend=1 00:07:02.291 --rc geninfo_all_blocks=1 00:07:02.291 --rc geninfo_unexecuted_blocks=1 00:07:02.291 00:07:02.291 ' 
00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.291 --rc genhtml_branch_coverage=1 00:07:02.291 --rc genhtml_function_coverage=1 00:07:02.291 --rc genhtml_legend=1 00:07:02.291 --rc geninfo_all_blocks=1 00:07:02.291 --rc geninfo_unexecuted_blocks=1 00:07:02.291 00:07:02.291 ' 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.291 --rc genhtml_branch_coverage=1 00:07:02.291 --rc genhtml_function_coverage=1 00:07:02.291 --rc genhtml_legend=1 00:07:02.291 --rc geninfo_all_blocks=1 00:07:02.291 --rc geninfo_unexecuted_blocks=1 00:07:02.291 00:07:02.291 ' 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.291 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.291 --rc genhtml_branch_coverage=1 00:07:02.291 --rc genhtml_function_coverage=1 00:07:02.291 --rc genhtml_legend=1 00:07:02.291 --rc geninfo_all_blocks=1 00:07:02.291 --rc geninfo_unexecuted_blocks=1 00:07:02.291 00:07:02.291 ' 00:07:02.291 16:12:52 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:02.291 16:12:52 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=103966 00:07:02.291 16:12:52 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:02.291 16:12:52 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 103966 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 103966 ']' 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 16:12:52 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.291 16:12:52 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.291 [2024-11-19 16:12:52.517279] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:02.291 [2024-11-19 16:12:52.517382] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid103966 ] 00:07:02.291 [2024-11-19 16:12:52.584499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.550 [2024-11-19 16:12:52.634188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.550 16:12:52 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.550 16:12:52 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:02.550 16:12:52 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:07:03.117 16:12:53 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 103966 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 103966 ']' 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 103966 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 103966 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:03.117 16:12:53 alias_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 103966' 00:07:03.117 killing process with pid 103966 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@973 -- # kill 103966 00:07:03.117 16:12:53 alias_rpc -- common/autotest_common.sh@978 -- # wait 103966 00:07:03.376 00:07:03.376 real 0m1.268s 00:07:03.376 user 0m1.416s 00:07:03.376 sys 0m0.403s 00:07:03.376 16:12:53 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.376 16:12:53 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.376 ************************************ 00:07:03.376 END TEST alias_rpc 00:07:03.376 ************************************ 00:07:03.376 16:12:53 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:07:03.376 16:12:53 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:03.376 16:12:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:03.376 16:12:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.376 16:12:53 -- common/autotest_common.sh@10 -- # set +x 00:07:03.376 ************************************ 00:07:03.376 START TEST spdkcli_tcp 00:07:03.376 ************************************ 00:07:03.376 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:03.376 * Looking for test storage... 
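The alias_rpc section just completed follows a recurring pattern in these tests: launch spdk_tgt in the background, poll until its RPC unix socket at /var/tmp/spdk.sock appears (the `waitforlisten` helper with its `max_retries=100` budget), drive it with scripts/rpc.py, then kill it via a trap. A minimal sketch of that lifecycle, assuming illustrative paths (`SPDK_BIN` here is a placeholder, not a value guaranteed by this log):

```shell
#!/usr/bin/env bash
# Sketch of the spdk_tgt start/poll/kill lifecycle seen in the alias_rpc
# section above. SPDK_BIN is an illustrative placeholder path.
SPDK_BIN=${SPDK_BIN:-./build/bin/spdk_tgt}
RPC_SOCK=${RPC_SOCK:-/var/tmp/spdk.sock}

# Poll until the RPC unix socket exists, mirroring waitforlisten's
# max_retries=100 retry budget from the log.
waitforlisten() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        [ -S "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}

if [ -x "$SPDK_BIN" ]; then                 # only run where SPDK is built
    "$SPDK_BIN" &                           # start the target in background
    spdk_tgt_pid=$!
    trap 'kill -9 "$spdk_tgt_pid" 2>/dev/null' EXIT   # cleanup on any exit
    waitforlisten "$RPC_SOCK" || exit 1
    ./scripts/rpc.py rpc_get_methods        # any RPC now works on the socket
fi
```

The `trap ... EXIT` is what the log's `trap 'killprocess $spdk_tgt_pid; exit 1' ERR` accomplishes for the error path: the target is torn down no matter how the test script exits.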
00:07:03.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:03.376 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:03.376 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:03.376 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.635 16:12:53 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:03.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.635 --rc genhtml_branch_coverage=1 00:07:03.635 --rc genhtml_function_coverage=1 00:07:03.635 --rc genhtml_legend=1 00:07:03.635 --rc geninfo_all_blocks=1 00:07:03.635 --rc geninfo_unexecuted_blocks=1 00:07:03.635 00:07:03.635 ' 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:03.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.635 --rc genhtml_branch_coverage=1 00:07:03.635 --rc genhtml_function_coverage=1 00:07:03.635 --rc genhtml_legend=1 00:07:03.635 --rc geninfo_all_blocks=1 00:07:03.635 --rc geninfo_unexecuted_blocks=1 00:07:03.635 00:07:03.635 ' 00:07:03.635 16:12:53 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:03.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.635 --rc genhtml_branch_coverage=1 00:07:03.635 --rc genhtml_function_coverage=1 00:07:03.635 --rc genhtml_legend=1 00:07:03.635 --rc geninfo_all_blocks=1 00:07:03.635 --rc geninfo_unexecuted_blocks=1 00:07:03.635 00:07:03.635 ' 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:03.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.635 --rc genhtml_branch_coverage=1 00:07:03.635 --rc genhtml_function_coverage=1 00:07:03.635 --rc genhtml_legend=1 00:07:03.635 --rc geninfo_all_blocks=1 00:07:03.635 --rc geninfo_unexecuted_blocks=1 00:07:03.635 00:07:03.635 ' 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:03.635 16:12:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.635 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=104251 00:07:03.636 16:12:53 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:03.636 16:12:53 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 104251 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 104251 ']' 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.636 16:12:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:03.636 [2024-11-19 16:12:53.836308] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:03.636 [2024-11-19 16:12:53.836405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104251 ] 00:07:03.636 [2024-11-19 16:12:53.903817] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:03.636 [2024-11-19 16:12:53.950640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.636 [2024-11-19 16:12:53.950644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.895 16:12:54 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.895 16:12:54 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:03.895 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=104285 00:07:03.895 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:03.895 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:07:04.154 [ 00:07:04.154 "bdev_malloc_delete", 00:07:04.154 "bdev_malloc_create", 00:07:04.154 "bdev_null_resize", 00:07:04.154 "bdev_null_delete", 00:07:04.154 "bdev_null_create", 00:07:04.154 "bdev_nvme_cuse_unregister", 00:07:04.154 "bdev_nvme_cuse_register", 00:07:04.154 "bdev_opal_new_user", 00:07:04.154 "bdev_opal_set_lock_state", 00:07:04.154 "bdev_opal_delete", 00:07:04.154 "bdev_opal_get_info", 00:07:04.154 "bdev_opal_create", 00:07:04.154 "bdev_nvme_opal_revert", 00:07:04.154 "bdev_nvme_opal_init", 00:07:04.154 "bdev_nvme_send_cmd", 00:07:04.154 "bdev_nvme_set_keys", 00:07:04.154 "bdev_nvme_get_path_iostat", 00:07:04.154 "bdev_nvme_get_mdns_discovery_info", 00:07:04.154 "bdev_nvme_stop_mdns_discovery", 00:07:04.154 "bdev_nvme_start_mdns_discovery", 00:07:04.154 "bdev_nvme_set_multipath_policy", 00:07:04.154 "bdev_nvme_set_preferred_path", 00:07:04.154 "bdev_nvme_get_io_paths", 00:07:04.154 "bdev_nvme_remove_error_injection", 00:07:04.154 "bdev_nvme_add_error_injection", 00:07:04.154 "bdev_nvme_get_discovery_info", 00:07:04.154 "bdev_nvme_stop_discovery", 00:07:04.154 "bdev_nvme_start_discovery", 00:07:04.154 "bdev_nvme_get_controller_health_info", 00:07:04.154 "bdev_nvme_disable_controller", 00:07:04.154 "bdev_nvme_enable_controller", 00:07:04.154 "bdev_nvme_reset_controller", 00:07:04.154 "bdev_nvme_get_transport_statistics", 00:07:04.154 "bdev_nvme_apply_firmware", 00:07:04.154 "bdev_nvme_detach_controller", 00:07:04.154 "bdev_nvme_get_controllers", 00:07:04.154 "bdev_nvme_attach_controller", 00:07:04.154 "bdev_nvme_set_hotplug", 00:07:04.154 "bdev_nvme_set_options", 00:07:04.154 "bdev_passthru_delete", 00:07:04.154 "bdev_passthru_create", 00:07:04.154 "bdev_lvol_set_parent_bdev", 00:07:04.154 "bdev_lvol_set_parent", 00:07:04.154 "bdev_lvol_check_shallow_copy", 00:07:04.154 "bdev_lvol_start_shallow_copy", 00:07:04.154 "bdev_lvol_grow_lvstore", 00:07:04.154 "bdev_lvol_get_lvols", 00:07:04.154 "bdev_lvol_get_lvstores", 
00:07:04.154 "bdev_lvol_delete", 00:07:04.154 "bdev_lvol_set_read_only", 00:07:04.154 "bdev_lvol_resize", 00:07:04.154 "bdev_lvol_decouple_parent", 00:07:04.154 "bdev_lvol_inflate", 00:07:04.154 "bdev_lvol_rename", 00:07:04.154 "bdev_lvol_clone_bdev", 00:07:04.154 "bdev_lvol_clone", 00:07:04.154 "bdev_lvol_snapshot", 00:07:04.154 "bdev_lvol_create", 00:07:04.154 "bdev_lvol_delete_lvstore", 00:07:04.154 "bdev_lvol_rename_lvstore", 00:07:04.154 "bdev_lvol_create_lvstore", 00:07:04.154 "bdev_raid_set_options", 00:07:04.154 "bdev_raid_remove_base_bdev", 00:07:04.154 "bdev_raid_add_base_bdev", 00:07:04.154 "bdev_raid_delete", 00:07:04.154 "bdev_raid_create", 00:07:04.154 "bdev_raid_get_bdevs", 00:07:04.154 "bdev_error_inject_error", 00:07:04.154 "bdev_error_delete", 00:07:04.154 "bdev_error_create", 00:07:04.154 "bdev_split_delete", 00:07:04.154 "bdev_split_create", 00:07:04.154 "bdev_delay_delete", 00:07:04.155 "bdev_delay_create", 00:07:04.155 "bdev_delay_update_latency", 00:07:04.155 "bdev_zone_block_delete", 00:07:04.155 "bdev_zone_block_create", 00:07:04.155 "blobfs_create", 00:07:04.155 "blobfs_detect", 00:07:04.155 "blobfs_set_cache_size", 00:07:04.155 "bdev_aio_delete", 00:07:04.155 "bdev_aio_rescan", 00:07:04.155 "bdev_aio_create", 00:07:04.155 "bdev_ftl_set_property", 00:07:04.155 "bdev_ftl_get_properties", 00:07:04.155 "bdev_ftl_get_stats", 00:07:04.155 "bdev_ftl_unmap", 00:07:04.155 "bdev_ftl_unload", 00:07:04.155 "bdev_ftl_delete", 00:07:04.155 "bdev_ftl_load", 00:07:04.155 "bdev_ftl_create", 00:07:04.155 "bdev_virtio_attach_controller", 00:07:04.155 "bdev_virtio_scsi_get_devices", 00:07:04.155 "bdev_virtio_detach_controller", 00:07:04.155 "bdev_virtio_blk_set_hotplug", 00:07:04.155 "bdev_iscsi_delete", 00:07:04.155 "bdev_iscsi_create", 00:07:04.155 "bdev_iscsi_set_options", 00:07:04.155 "accel_error_inject_error", 00:07:04.155 "ioat_scan_accel_module", 00:07:04.155 "dsa_scan_accel_module", 00:07:04.155 "iaa_scan_accel_module", 00:07:04.155 
"vfu_virtio_create_fs_endpoint", 00:07:04.155 "vfu_virtio_create_scsi_endpoint", 00:07:04.155 "vfu_virtio_scsi_remove_target", 00:07:04.155 "vfu_virtio_scsi_add_target", 00:07:04.155 "vfu_virtio_create_blk_endpoint", 00:07:04.155 "vfu_virtio_delete_endpoint", 00:07:04.155 "keyring_file_remove_key", 00:07:04.155 "keyring_file_add_key", 00:07:04.155 "keyring_linux_set_options", 00:07:04.155 "fsdev_aio_delete", 00:07:04.155 "fsdev_aio_create", 00:07:04.155 "iscsi_get_histogram", 00:07:04.155 "iscsi_enable_histogram", 00:07:04.155 "iscsi_set_options", 00:07:04.155 "iscsi_get_auth_groups", 00:07:04.155 "iscsi_auth_group_remove_secret", 00:07:04.155 "iscsi_auth_group_add_secret", 00:07:04.155 "iscsi_delete_auth_group", 00:07:04.155 "iscsi_create_auth_group", 00:07:04.155 "iscsi_set_discovery_auth", 00:07:04.155 "iscsi_get_options", 00:07:04.155 "iscsi_target_node_request_logout", 00:07:04.155 "iscsi_target_node_set_redirect", 00:07:04.155 "iscsi_target_node_set_auth", 00:07:04.155 "iscsi_target_node_add_lun", 00:07:04.155 "iscsi_get_stats", 00:07:04.155 "iscsi_get_connections", 00:07:04.155 "iscsi_portal_group_set_auth", 00:07:04.155 "iscsi_start_portal_group", 00:07:04.155 "iscsi_delete_portal_group", 00:07:04.155 "iscsi_create_portal_group", 00:07:04.155 "iscsi_get_portal_groups", 00:07:04.155 "iscsi_delete_target_node", 00:07:04.155 "iscsi_target_node_remove_pg_ig_maps", 00:07:04.155 "iscsi_target_node_add_pg_ig_maps", 00:07:04.155 "iscsi_create_target_node", 00:07:04.155 "iscsi_get_target_nodes", 00:07:04.155 "iscsi_delete_initiator_group", 00:07:04.155 "iscsi_initiator_group_remove_initiators", 00:07:04.155 "iscsi_initiator_group_add_initiators", 00:07:04.155 "iscsi_create_initiator_group", 00:07:04.155 "iscsi_get_initiator_groups", 00:07:04.155 "nvmf_set_crdt", 00:07:04.155 "nvmf_set_config", 00:07:04.155 "nvmf_set_max_subsystems", 00:07:04.155 "nvmf_stop_mdns_prr", 00:07:04.155 "nvmf_publish_mdns_prr", 00:07:04.155 "nvmf_subsystem_get_listeners", 00:07:04.155 
"nvmf_subsystem_get_qpairs", 00:07:04.155 "nvmf_subsystem_get_controllers", 00:07:04.155 "nvmf_get_stats", 00:07:04.155 "nvmf_get_transports", 00:07:04.155 "nvmf_create_transport", 00:07:04.155 "nvmf_get_targets", 00:07:04.155 "nvmf_delete_target", 00:07:04.155 "nvmf_create_target", 00:07:04.155 "nvmf_subsystem_allow_any_host", 00:07:04.155 "nvmf_subsystem_set_keys", 00:07:04.155 "nvmf_subsystem_remove_host", 00:07:04.155 "nvmf_subsystem_add_host", 00:07:04.155 "nvmf_ns_remove_host", 00:07:04.155 "nvmf_ns_add_host", 00:07:04.155 "nvmf_subsystem_remove_ns", 00:07:04.155 "nvmf_subsystem_set_ns_ana_group", 00:07:04.155 "nvmf_subsystem_add_ns", 00:07:04.155 "nvmf_subsystem_listener_set_ana_state", 00:07:04.155 "nvmf_discovery_get_referrals", 00:07:04.155 "nvmf_discovery_remove_referral", 00:07:04.155 "nvmf_discovery_add_referral", 00:07:04.155 "nvmf_subsystem_remove_listener", 00:07:04.155 "nvmf_subsystem_add_listener", 00:07:04.155 "nvmf_delete_subsystem", 00:07:04.155 "nvmf_create_subsystem", 00:07:04.155 "nvmf_get_subsystems", 00:07:04.155 "env_dpdk_get_mem_stats", 00:07:04.155 "nbd_get_disks", 00:07:04.155 "nbd_stop_disk", 00:07:04.155 "nbd_start_disk", 00:07:04.155 "ublk_recover_disk", 00:07:04.155 "ublk_get_disks", 00:07:04.155 "ublk_stop_disk", 00:07:04.155 "ublk_start_disk", 00:07:04.155 "ublk_destroy_target", 00:07:04.155 "ublk_create_target", 00:07:04.155 "virtio_blk_create_transport", 00:07:04.155 "virtio_blk_get_transports", 00:07:04.155 "vhost_controller_set_coalescing", 00:07:04.155 "vhost_get_controllers", 00:07:04.155 "vhost_delete_controller", 00:07:04.155 "vhost_create_blk_controller", 00:07:04.155 "vhost_scsi_controller_remove_target", 00:07:04.155 "vhost_scsi_controller_add_target", 00:07:04.155 "vhost_start_scsi_controller", 00:07:04.155 "vhost_create_scsi_controller", 00:07:04.155 "thread_set_cpumask", 00:07:04.155 "scheduler_set_options", 00:07:04.155 "framework_get_governor", 00:07:04.155 "framework_get_scheduler", 00:07:04.155 
"framework_set_scheduler", 00:07:04.155 "framework_get_reactors", 00:07:04.155 "thread_get_io_channels", 00:07:04.155 "thread_get_pollers", 00:07:04.155 "thread_get_stats", 00:07:04.155 "framework_monitor_context_switch", 00:07:04.155 "spdk_kill_instance", 00:07:04.155 "log_enable_timestamps", 00:07:04.155 "log_get_flags", 00:07:04.155 "log_clear_flag", 00:07:04.155 "log_set_flag", 00:07:04.155 "log_get_level", 00:07:04.155 "log_set_level", 00:07:04.155 "log_get_print_level", 00:07:04.155 "log_set_print_level", 00:07:04.155 "framework_enable_cpumask_locks", 00:07:04.155 "framework_disable_cpumask_locks", 00:07:04.155 "framework_wait_init", 00:07:04.155 "framework_start_init", 00:07:04.155 "scsi_get_devices", 00:07:04.155 "bdev_get_histogram", 00:07:04.155 "bdev_enable_histogram", 00:07:04.155 "bdev_set_qos_limit", 00:07:04.155 "bdev_set_qd_sampling_period", 00:07:04.155 "bdev_get_bdevs", 00:07:04.155 "bdev_reset_iostat", 00:07:04.155 "bdev_get_iostat", 00:07:04.155 "bdev_examine", 00:07:04.155 "bdev_wait_for_examine", 00:07:04.155 "bdev_set_options", 00:07:04.155 "accel_get_stats", 00:07:04.155 "accel_set_options", 00:07:04.155 "accel_set_driver", 00:07:04.155 "accel_crypto_key_destroy", 00:07:04.155 "accel_crypto_keys_get", 00:07:04.155 "accel_crypto_key_create", 00:07:04.155 "accel_assign_opc", 00:07:04.155 "accel_get_module_info", 00:07:04.155 "accel_get_opc_assignments", 00:07:04.155 "vmd_rescan", 00:07:04.155 "vmd_remove_device", 00:07:04.155 "vmd_enable", 00:07:04.155 "sock_get_default_impl", 00:07:04.155 "sock_set_default_impl", 00:07:04.155 "sock_impl_set_options", 00:07:04.155 "sock_impl_get_options", 00:07:04.155 "iobuf_get_stats", 00:07:04.155 "iobuf_set_options", 00:07:04.155 "keyring_get_keys", 00:07:04.155 "vfu_tgt_set_base_path", 00:07:04.155 "framework_get_pci_devices", 00:07:04.155 "framework_get_config", 00:07:04.155 "framework_get_subsystems", 00:07:04.155 "fsdev_set_opts", 00:07:04.155 "fsdev_get_opts", 00:07:04.155 "trace_get_info", 
00:07:04.155 "trace_get_tpoint_group_mask", 00:07:04.155 "trace_disable_tpoint_group", 00:07:04.155 "trace_enable_tpoint_group", 00:07:04.155 "trace_clear_tpoint_mask", 00:07:04.155 "trace_set_tpoint_mask", 00:07:04.155 "notify_get_notifications", 00:07:04.155 "notify_get_types", 00:07:04.155 "spdk_get_version", 00:07:04.155 "rpc_get_methods" 00:07:04.155 ] 00:07:04.155 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:04.155 16:12:54 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:04.155 16:12:54 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:04.414 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:04.414 16:12:54 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 104251 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 104251 ']' 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 104251 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104251 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104251' 00:07:04.414 killing process with pid 104251 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 104251 00:07:04.414 16:12:54 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 104251 00:07:04.673 00:07:04.673 real 0m1.289s 00:07:04.673 user 0m2.303s 00:07:04.673 sys 0m0.486s 00:07:04.673 16:12:54 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:04.673 16:12:54 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:07:04.673 ************************************ 00:07:04.673 END TEST spdkcli_tcp 00:07:04.673 ************************************ 00:07:04.673 16:12:54 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:04.673 16:12:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:04.673 16:12:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:04.673 16:12:54 -- common/autotest_common.sh@10 -- # set +x 00:07:04.673 ************************************ 00:07:04.673 START TEST dpdk_mem_utility 00:07:04.673 ************************************ 00:07:04.673 16:12:54 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:04.933 * Looking for test storage... 00:07:04.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:07:04.933 16:12:55 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:04.933 16:12:55 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:04.933 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.933 --rc genhtml_branch_coverage=1 00:07:04.933 --rc genhtml_function_coverage=1 00:07:04.933 --rc genhtml_legend=1 00:07:04.933 --rc geninfo_all_blocks=1 00:07:04.933 --rc geninfo_unexecuted_blocks=1 00:07:04.933 00:07:04.933 ' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:04.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.933 --rc genhtml_branch_coverage=1 00:07:04.933 --rc genhtml_function_coverage=1 00:07:04.933 --rc genhtml_legend=1 00:07:04.933 --rc geninfo_all_blocks=1 00:07:04.933 --rc geninfo_unexecuted_blocks=1 00:07:04.933 00:07:04.933 ' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:04.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.933 --rc genhtml_branch_coverage=1 00:07:04.933 --rc genhtml_function_coverage=1 00:07:04.933 --rc genhtml_legend=1 00:07:04.933 --rc geninfo_all_blocks=1 00:07:04.933 --rc geninfo_unexecuted_blocks=1 00:07:04.933 00:07:04.933 ' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:04.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:04.933 --rc genhtml_branch_coverage=1 00:07:04.933 --rc genhtml_function_coverage=1 00:07:04.933 --rc genhtml_legend=1 00:07:04.933 --rc geninfo_all_blocks=1 00:07:04.933 --rc geninfo_unexecuted_blocks=1 00:07:04.933 00:07:04.933 ' 00:07:04.933 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:04.933 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=104491 00:07:04.933 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:04.933 16:12:55 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 104491 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 104491 ']' 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:04.933 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:04.933 [2024-11-19 16:12:55.165249] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:04.933 [2024-11-19 16:12:55.165330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104491 ] 00:07:04.933 [2024-11-19 16:12:55.229415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.192 [2024-11-19 16:12:55.275794] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.192 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:05.192 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:05.192 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:05.192 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:05.192 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:05.192 
16:12:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:05.192 { 00:07:05.192 "filename": "/tmp/spdk_mem_dump.txt" 00:07:05.192 } 00:07:05.192 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:05.192 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:05.451 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:05.451 1 heaps totaling size 810.000000 MiB 00:07:05.451 size: 810.000000 MiB heap id: 0 00:07:05.451 end heaps---------- 00:07:05.451 9 mempools totaling size 595.772034 MiB 00:07:05.451 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:05.451 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:05.451 size: 92.545471 MiB name: bdev_io_104491 00:07:05.451 size: 50.003479 MiB name: msgpool_104491 00:07:05.451 size: 36.509338 MiB name: fsdev_io_104491 00:07:05.451 size: 21.763794 MiB name: PDU_Pool 00:07:05.451 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:05.451 size: 4.133484 MiB name: evtpool_104491 00:07:05.451 size: 0.026123 MiB name: Session_Pool 00:07:05.451 end mempools------- 00:07:05.452 6 memzones totaling size 4.142822 MiB 00:07:05.452 size: 1.000366 MiB name: RG_ring_0_104491 00:07:05.452 size: 1.000366 MiB name: RG_ring_1_104491 00:07:05.452 size: 1.000366 MiB name: RG_ring_4_104491 00:07:05.452 size: 1.000366 MiB name: RG_ring_5_104491 00:07:05.452 size: 0.125366 MiB name: RG_ring_2_104491 00:07:05.452 size: 0.015991 MiB name: RG_ring_3_104491 00:07:05.452 end memzones------- 00:07:05.452 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:05.452 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:05.452 list of free elements. 
size: 10.862488 MiB 00:07:05.452 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:05.452 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:05.452 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:05.452 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:05.452 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:05.452 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:05.452 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:05.452 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:05.452 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:05.452 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:05.452 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:05.452 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:05.452 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:05.452 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:05.452 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:05.452 list of standard malloc elements. 
size: 199.218628 MiB 00:07:05.452 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:05.452 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:05.452 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:05.452 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:05.452 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:05.452 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:05.452 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:05.452 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:05.452 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:05.452 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:05.452 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:05.452 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:05.452 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:05.452 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:05.452 list of memzone associated elements. 
size: 599.918884 MiB 00:07:05.452 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:05.452 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:05.452 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:05.452 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:05.452 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:05.452 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_104491_0 00:07:05.452 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:05.452 associated memzone info: size: 48.002930 MiB name: MP_msgpool_104491_0 00:07:05.452 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:05.452 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_104491_0 00:07:05.452 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:05.452 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:05.452 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:05.452 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:05.452 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:05.452 associated memzone info: size: 3.000122 MiB name: MP_evtpool_104491_0 00:07:05.452 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:05.452 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_104491 00:07:05.452 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:05.452 associated memzone info: size: 1.007996 MiB name: MP_evtpool_104491 00:07:05.452 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:05.452 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:05.452 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:05.452 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:05.452 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:05.452 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:05.452 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:05.452 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:05.452 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:05.452 associated memzone info: size: 1.000366 MiB name: RG_ring_0_104491 00:07:05.452 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:05.452 associated memzone info: size: 1.000366 MiB name: RG_ring_1_104491 00:07:05.452 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:05.452 associated memzone info: size: 1.000366 MiB name: RG_ring_4_104491 00:07:05.452 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:05.452 associated memzone info: size: 1.000366 MiB name: RG_ring_5_104491 00:07:05.452 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:05.452 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_104491 00:07:05.452 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:05.452 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_104491 00:07:05.452 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:05.452 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:05.452 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:05.452 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:05.452 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:05.452 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:05.452 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:05.452 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_104491 00:07:05.452 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:05.452 associated memzone info: size: 0.125366 MiB name: RG_ring_2_104491 00:07:05.452 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:07:05.452 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:05.452 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:05.452 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:05.452 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:05.452 associated memzone info: size: 0.015991 MiB name: RG_ring_3_104491 00:07:05.452 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:05.452 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:05.452 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:05.452 associated memzone info: size: 0.000183 MiB name: MP_msgpool_104491 00:07:05.452 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:05.452 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_104491 00:07:05.452 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:05.452 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_104491 00:07:05.452 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:05.452 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:05.452 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:05.452 16:12:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 104491 00:07:05.452 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 104491 ']' 00:07:05.452 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 104491 00:07:05.452 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:05.452 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.452 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104491 00:07:05.453 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:05.453 16:12:55 dpdk_mem_utility -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:05.453 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104491' 00:07:05.453 killing process with pid 104491 00:07:05.453 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 104491 00:07:05.453 16:12:55 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 104491 00:07:06.020 00:07:06.021 real 0m1.086s 00:07:06.021 user 0m1.074s 00:07:06.021 sys 0m0.409s 00:07:06.021 16:12:56 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.021 16:12:56 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 END TEST dpdk_mem_utility 00:07:06.021 ************************************ 00:07:06.021 16:12:56 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:06.021 16:12:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.021 16:12:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.021 16:12:56 -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 START TEST event 00:07:06.021 ************************************ 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:06.021 * Looking for test storage... 
00:07:06.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.021 16:12:56 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.021 16:12:56 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.021 16:12:56 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.021 16:12:56 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.021 16:12:56 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.021 16:12:56 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.021 16:12:56 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.021 16:12:56 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.021 16:12:56 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.021 16:12:56 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.021 16:12:56 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.021 16:12:56 event -- scripts/common.sh@344 -- # case "$op" in 00:07:06.021 16:12:56 event -- scripts/common.sh@345 -- # : 1 00:07:06.021 16:12:56 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.021 16:12:56 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.021 16:12:56 event -- scripts/common.sh@365 -- # decimal 1 00:07:06.021 16:12:56 event -- scripts/common.sh@353 -- # local d=1 00:07:06.021 16:12:56 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.021 16:12:56 event -- scripts/common.sh@355 -- # echo 1 00:07:06.021 16:12:56 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.021 16:12:56 event -- scripts/common.sh@366 -- # decimal 2 00:07:06.021 16:12:56 event -- scripts/common.sh@353 -- # local d=2 00:07:06.021 16:12:56 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.021 16:12:56 event -- scripts/common.sh@355 -- # echo 2 00:07:06.021 16:12:56 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.021 16:12:56 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.021 16:12:56 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.021 16:12:56 event -- scripts/common.sh@368 -- # return 0 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.021 --rc genhtml_branch_coverage=1 00:07:06.021 --rc genhtml_function_coverage=1 00:07:06.021 --rc genhtml_legend=1 00:07:06.021 --rc geninfo_all_blocks=1 00:07:06.021 --rc geninfo_unexecuted_blocks=1 00:07:06.021 00:07:06.021 ' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.021 --rc genhtml_branch_coverage=1 00:07:06.021 --rc genhtml_function_coverage=1 00:07:06.021 --rc genhtml_legend=1 00:07:06.021 --rc geninfo_all_blocks=1 00:07:06.021 --rc geninfo_unexecuted_blocks=1 00:07:06.021 00:07:06.021 ' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.021 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:06.021 --rc genhtml_branch_coverage=1 00:07:06.021 --rc genhtml_function_coverage=1 00:07:06.021 --rc genhtml_legend=1 00:07:06.021 --rc geninfo_all_blocks=1 00:07:06.021 --rc geninfo_unexecuted_blocks=1 00:07:06.021 00:07:06.021 ' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.021 --rc genhtml_branch_coverage=1 00:07:06.021 --rc genhtml_function_coverage=1 00:07:06.021 --rc genhtml_legend=1 00:07:06.021 --rc geninfo_all_blocks=1 00:07:06.021 --rc geninfo_unexecuted_blocks=1 00:07:06.021 00:07:06.021 ' 00:07:06.021 16:12:56 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:06.021 16:12:56 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:06.021 16:12:56 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:06.021 16:12:56 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.021 16:12:56 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.021 ************************************ 00:07:06.021 START TEST event_perf 00:07:06.021 ************************************ 00:07:06.021 16:12:56 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:06.021 Running I/O for 1 seconds...[2024-11-19 16:12:56.284811] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:06.021 [2024-11-19 16:12:56.284871] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104690 ] 00:07:06.021 [2024-11-19 16:12:56.352415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.280 [2024-11-19 16:12:56.405142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.280 [2024-11-19 16:12:56.405200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.280 [2024-11-19 16:12:56.405265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.280 [2024-11-19 16:12:56.405268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.214 Running I/O for 1 seconds... 00:07:07.214 lcore 0: 231311 00:07:07.214 lcore 1: 231311 00:07:07.214 lcore 2: 231311 00:07:07.214 lcore 3: 231311 00:07:07.214 done. 
00:07:07.214 00:07:07.214 real 0m1.180s 00:07:07.214 user 0m4.098s 00:07:07.214 sys 0m0.076s 00:07:07.214 16:12:57 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.214 16:12:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:07.214 ************************************ 00:07:07.214 END TEST event_perf 00:07:07.214 ************************************ 00:07:07.214 16:12:57 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:07.214 16:12:57 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:07.214 16:12:57 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.214 16:12:57 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.214 ************************************ 00:07:07.214 START TEST event_reactor 00:07:07.214 ************************************ 00:07:07.214 16:12:57 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:07.214 [2024-11-19 16:12:57.514836] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:07.214 [2024-11-19 16:12:57.514903] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104851 ] 00:07:07.473 [2024-11-19 16:12:57.581679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.473 [2024-11-19 16:12:57.625792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.409 test_start 00:07:08.409 oneshot 00:07:08.409 tick 100 00:07:08.409 tick 100 00:07:08.409 tick 250 00:07:08.409 tick 100 00:07:08.409 tick 100 00:07:08.409 tick 100 00:07:08.409 tick 250 00:07:08.409 tick 500 00:07:08.409 tick 100 00:07:08.409 tick 100 00:07:08.409 tick 250 00:07:08.409 tick 100 00:07:08.409 tick 100 00:07:08.409 test_end 00:07:08.409 00:07:08.409 real 0m1.169s 00:07:08.409 user 0m1.096s 00:07:08.409 sys 0m0.069s 00:07:08.409 16:12:58 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:08.409 16:12:58 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:08.409 ************************************ 00:07:08.410 END TEST event_reactor 00:07:08.410 ************************************ 00:07:08.410 16:12:58 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:08.410 16:12:58 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:08.410 16:12:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:08.410 16:12:58 event -- common/autotest_common.sh@10 -- # set +x 00:07:08.410 ************************************ 00:07:08.410 START TEST event_reactor_perf 00:07:08.410 ************************************ 00:07:08.410 16:12:58 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:08.410 [2024-11-19 16:12:58.730574] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:08.410 [2024-11-19 16:12:58.730644] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105003 ] 00:07:08.669 [2024-11-19 16:12:58.796105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.669 [2024-11-19 16:12:58.839353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.605 test_start 00:07:09.605 test_end 00:07:09.605 Performance: 437641 events per second 00:07:09.605 00:07:09.605 real 0m1.166s 00:07:09.605 user 0m1.088s 00:07:09.605 sys 0m0.073s 00:07:09.605 16:12:59 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:09.605 16:12:59 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 ************************************ 00:07:09.605 END TEST event_reactor_perf 00:07:09.605 ************************************ 00:07:09.605 16:12:59 event -- event/event.sh@49 -- # uname -s 00:07:09.605 16:12:59 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:09.605 16:12:59 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:09.605 16:12:59 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:09.605 16:12:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:09.605 16:12:59 event -- common/autotest_common.sh@10 -- # set +x 00:07:09.605 ************************************ 00:07:09.605 START TEST event_scheduler 00:07:09.605 ************************************ 00:07:09.605 16:12:59 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:09.865 * Looking for test storage... 00:07:09.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:09.865 16:12:59 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:09.865 16:12:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:09.865 16:12:59 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:09.865 16:13:00 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:09.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.865 --rc genhtml_branch_coverage=1 00:07:09.865 --rc genhtml_function_coverage=1 00:07:09.865 --rc genhtml_legend=1 00:07:09.865 --rc geninfo_all_blocks=1 00:07:09.865 --rc geninfo_unexecuted_blocks=1 00:07:09.865 00:07:09.865 ' 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:09.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.865 --rc genhtml_branch_coverage=1 00:07:09.865 --rc genhtml_function_coverage=1 00:07:09.865 --rc 
genhtml_legend=1 00:07:09.865 --rc geninfo_all_blocks=1 00:07:09.865 --rc geninfo_unexecuted_blocks=1 00:07:09.865 00:07:09.865 ' 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:09.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.865 --rc genhtml_branch_coverage=1 00:07:09.865 --rc genhtml_function_coverage=1 00:07:09.865 --rc genhtml_legend=1 00:07:09.865 --rc geninfo_all_blocks=1 00:07:09.865 --rc geninfo_unexecuted_blocks=1 00:07:09.865 00:07:09.865 ' 00:07:09.865 16:13:00 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:09.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:09.865 --rc genhtml_branch_coverage=1 00:07:09.865 --rc genhtml_function_coverage=1 00:07:09.865 --rc genhtml_legend=1 00:07:09.865 --rc geninfo_all_blocks=1 00:07:09.865 --rc geninfo_unexecuted_blocks=1 00:07:09.865 00:07:09.865 ' 00:07:09.865 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:09.865 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=105193 00:07:09.866 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:09.866 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:09.866 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 105193 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 105193 ']' 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:09.866 16:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:09.866 [2024-11-19 16:13:00.141425] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:09.866 [2024-11-19 16:13:00.141525] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105193 ] 00:07:10.124 [2024-11-19 16:13:00.211859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:10.124 [2024-11-19 16:13:00.267007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.124 [2024-11-19 16:13:00.267078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.124 [2024-11-19 16:13:00.267138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:10.124 [2024-11-19 16:13:00.267141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:10.124 16:13:00 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:10.125 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.125 [2024-11-19 16:13:00.404165] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:10.125 [2024-11-19 16:13:00.404192] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:10.125 [2024-11-19 16:13:00.404209] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:10.125 [2024-11-19 16:13:00.404220] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:10.125 [2024-11-19 16:13:00.404230] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.125 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.125 16:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 [2024-11-19 16:13:00.503647] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:10.384 16:13:00 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:10.384 16:13:00 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.384 16:13:00 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 ************************************ 00:07:10.384 START TEST scheduler_create_thread 00:07:10.384 ************************************ 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 2 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 3 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 4 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 5 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 6 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 7 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.384 8 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.384 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 9 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 10 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 16:13:00 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:10.385 16:13:00 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.952 16:13:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.952 00:07:10.952 real 0m0.590s 00:07:10.952 user 0m0.009s 00:07:10.952 sys 0m0.004s 00:07:10.952 16:13:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.952 16:13:01 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.952 ************************************ 00:07:10.952 END TEST scheduler_create_thread 00:07:10.952 ************************************ 00:07:10.952 16:13:01 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.952 16:13:01 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 105193 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 105193 ']' 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 105193 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105193 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:10.952 16:13:01 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105193' 00:07:10.952 killing process with pid 105193 00:07:10.953 16:13:01 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 105193 00:07:10.953 16:13:01 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 105193 00:07:11.522 [2024-11-19 16:13:01.599748] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:11.522 00:07:11.522 real 0m1.848s 00:07:11.522 user 0m2.612s 00:07:11.522 sys 0m0.344s 00:07:11.522 16:13:01 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.522 16:13:01 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:11.522 ************************************ 00:07:11.522 END TEST event_scheduler 00:07:11.522 ************************************ 00:07:11.522 16:13:01 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:11.522 16:13:01 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:11.522 16:13:01 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.522 16:13:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.522 16:13:01 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.522 ************************************ 00:07:11.522 START TEST app_repeat 00:07:11.522 ************************************ 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@19 -- # repeat_pid=105505 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 105505' 00:07:11.522 Process app_repeat pid: 105505 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:11.522 spdk_app_start Round 0 00:07:11.522 16:13:01 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105505 /var/tmp/spdk-nbd.sock 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105505 ']' 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:11.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.522 16:13:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:11.782 [2024-11-19 16:13:01.871238] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:11.782 [2024-11-19 16:13:01.871303] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105505 ] 00:07:11.782 [2024-11-19 16:13:01.934549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:11.782 [2024-11-19 16:13:01.978921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:11.782 [2024-11-19 16:13:01.978925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.782 16:13:02 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.782 16:13:02 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:11.782 16:13:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.350 Malloc0 00:07:12.350 16:13:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:12.350 Malloc1 00:07:12.610 16:13:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:12.610 
16:13:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.610 16:13:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:12.869 /dev/nbd0 00:07:12.869 16:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:12.869 16:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:12.869 1+0 records in 00:07:12.869 1+0 records out 00:07:12.869 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000218341 s, 18.8 MB/s 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:12.869 16:13:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:12.869 16:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:12.869 16:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:12.869 16:13:03 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:13.127 /dev/nbd1 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:13.127 16:13:03 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:13.127 1+0 records in 00:07:13.127 1+0 records out 00:07:13.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000248597 s, 16.5 MB/s 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:13.127 16:13:03 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.127 16:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:13.386 { 00:07:13.386 "nbd_device": "/dev/nbd0", 00:07:13.386 "bdev_name": "Malloc0" 00:07:13.386 }, 00:07:13.386 { 00:07:13.386 "nbd_device": "/dev/nbd1", 00:07:13.386 "bdev_name": "Malloc1" 00:07:13.386 } 00:07:13.386 ]' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:13.386 { 00:07:13.386 "nbd_device": "/dev/nbd0", 00:07:13.386 "bdev_name": "Malloc0" 00:07:13.386 
}, 00:07:13.386 { 00:07:13.386 "nbd_device": "/dev/nbd1", 00:07:13.386 "bdev_name": "Malloc1" 00:07:13.386 } 00:07:13.386 ]' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:13.386 /dev/nbd1' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:13.386 /dev/nbd1' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:13.386 256+0 records in 00:07:13.386 256+0 records out 00:07:13.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00513741 s, 204 MB/s 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:13.386 256+0 records in 00:07:13.386 256+0 records out 00:07:13.386 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0197817 s, 53.0 MB/s 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:13.386 16:13:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:13.645 256+0 records in 00:07:13.645 256+0 records out 00:07:13.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227524 s, 46.1 MB/s 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:13.645 16:13:03 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.645 16:13:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:13.903 16:13:04 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:14.161 16:13:04 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:14.161 16:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:14.420 16:13:04 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:14.420 16:13:04 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:14.678 16:13:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:14.936 [2024-11-19 16:13:05.111922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:14.936 [2024-11-19 16:13:05.154783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.936 [2024-11-19 16:13:05.154783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.936 [2024-11-19 16:13:05.212189] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:14.936 [2024-11-19 16:13:05.212263] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:18.218 16:13:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:18.218 16:13:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:18.218 spdk_app_start Round 1 00:07:18.218 16:13:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105505 /var/tmp/spdk-nbd.sock 00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105505 ']' 00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:18.218 16:13:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:18.218 16:13:08 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:18.218 16:13:08 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:18.218 16:13:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.218 Malloc0 00:07:18.218 16:13:08 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:18.477 Malloc1 00:07:18.477 16:13:08 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:18.477 16:13:08 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:18.736 /dev/nbd0 00:07:19.006 16:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:19.006 16:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.006 1+0 records in 00:07:19.006 1+0 records out 00:07:19.006 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000197584 s, 20.7 MB/s 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.006 16:13:09 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.006 16:13:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.006 16:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.006 16:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.006 16:13:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:19.266 /dev/nbd1 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:19.266 1+0 records in 00:07:19.266 1+0 records out 00:07:19.266 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212557 s, 19.3 MB/s 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:19.266 16:13:09 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.266 16:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:19.525 { 00:07:19.525 "nbd_device": "/dev/nbd0", 00:07:19.525 "bdev_name": "Malloc0" 00:07:19.525 }, 00:07:19.525 { 00:07:19.525 "nbd_device": "/dev/nbd1", 00:07:19.525 "bdev_name": "Malloc1" 00:07:19.525 } 00:07:19.525 ]' 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:19.525 { 00:07:19.525 "nbd_device": "/dev/nbd0", 00:07:19.525 "bdev_name": "Malloc0" 00:07:19.525 }, 00:07:19.525 { 00:07:19.525 "nbd_device": "/dev/nbd1", 00:07:19.525 "bdev_name": "Malloc1" 00:07:19.525 } 00:07:19.525 ]' 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:19.525 /dev/nbd1' 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:19.525 /dev/nbd1' 00:07:19.525 
16:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:19.525 16:13:09 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:19.526 256+0 records in 00:07:19.526 256+0 records out 00:07:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514931 s, 204 MB/s 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:19.526 256+0 records in 00:07:19.526 256+0 records out 00:07:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201649 s, 52.0 MB/s 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:19.526 256+0 records in 00:07:19.526 256+0 records out 00:07:19.526 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.021255 s, 49.3 MB/s 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.526 16:13:09 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:19.785 16:13:10 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:20.351 16:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:20.351 16:13:10 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:20.352 16:13:10 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:20.352 16:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:20.610 16:13:10 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:20.610 16:13:10 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:20.869 16:13:11 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:21.127 [2024-11-19 16:13:11.222298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:21.127 [2024-11-19 16:13:11.265348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.127 [2024-11-19 16:13:11.265352] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.127 [2024-11-19 16:13:11.323964] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:21.127 [2024-11-19 16:13:11.324036] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:24.411 16:13:14 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:24.411 16:13:14 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:24.411 spdk_app_start Round 2 00:07:24.411 16:13:14 event.app_repeat -- event/event.sh@25 -- # waitforlisten 105505 /var/tmp/spdk-nbd.sock 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105505 ']' 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:24.411 16:13:14 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:24.411 16:13:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.411 Malloc0 00:07:24.411 16:13:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:24.670 Malloc1 00:07:24.670 16:13:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.670 16:13:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:24.929 /dev/nbd0 00:07:24.929 16:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:24.929 16:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:24.929 1+0 records in 00:07:24.929 1+0 records out 00:07:24.929 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158039 s, 25.9 MB/s 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:24.929 16:13:15 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:24.929 16:13:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:24.929 16:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.929 16:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:24.929 16:13:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:25.187 /dev/nbd1 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:25.187 1+0 records in 00:07:25.187 1+0 records out 00:07:25.187 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222423 s, 18.4 MB/s 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:25.187 16:13:15 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.187 16:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.446 16:13:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.446 { 00:07:25.446 "nbd_device": "/dev/nbd0", 00:07:25.446 "bdev_name": "Malloc0" 00:07:25.446 }, 00:07:25.446 { 00:07:25.446 "nbd_device": "/dev/nbd1", 00:07:25.446 "bdev_name": "Malloc1" 00:07:25.446 } 00:07:25.446 ]' 00:07:25.446 16:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.446 { 00:07:25.446 "nbd_device": "/dev/nbd0", 00:07:25.446 "bdev_name": "Malloc0" 00:07:25.446 }, 00:07:25.446 { 00:07:25.446 "nbd_device": "/dev/nbd1", 00:07:25.446 "bdev_name": "Malloc1" 00:07:25.447 } 00:07:25.447 ]' 00:07:25.447 16:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.705 /dev/nbd1' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.705 /dev/nbd1' 00:07:25.705 
16:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:25.705 256+0 records in 00:07:25.705 256+0 records out 00:07:25.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00454737 s, 231 MB/s 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.705 256+0 records in 00:07:25.705 256+0 records out 00:07:25.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202553 s, 51.8 MB/s 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.705 256+0 records in 00:07:25.705 256+0 records out 00:07:25.705 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216975 s, 48.3 MB/s 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.705 16:13:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:25.964 16:13:16 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:26.222 16:13:16 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:26.222 16:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:26.480 16:13:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:26.480 16:13:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:27.047 16:13:17 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:27.047 [2024-11-19 16:13:17.264462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:27.047 [2024-11-19 16:13:17.309174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:27.047 [2024-11-19 16:13:17.309174] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.047 [2024-11-19 16:13:17.364300] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:27.047 [2024-11-19 16:13:17.364381] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:30.333 16:13:20 event.app_repeat -- event/event.sh@38 -- # waitforlisten 105505 /var/tmp/spdk-nbd.sock 00:07:30.333 16:13:20 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 105505 ']' 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:30.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:30.334 16:13:20 event.app_repeat -- event/event.sh@39 -- # killprocess 105505 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 105505 ']' 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 105505 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105505 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105505' 00:07:30.334 killing process with pid 105505 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@973 -- # kill 105505 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@978 -- # wait 105505 00:07:30.334 spdk_app_start is called in Round 0. 00:07:30.334 Shutdown signal received, stop current app iteration 00:07:30.334 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 reinitialization... 00:07:30.334 spdk_app_start is called in Round 1. 00:07:30.334 Shutdown signal received, stop current app iteration 00:07:30.334 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 reinitialization... 00:07:30.334 spdk_app_start is called in Round 2. 
00:07:30.334 Shutdown signal received, stop current app iteration 00:07:30.334 Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 reinitialization... 00:07:30.334 spdk_app_start is called in Round 3. 00:07:30.334 Shutdown signal received, stop current app iteration 00:07:30.334 16:13:20 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:30.334 16:13:20 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:30.334 00:07:30.334 real 0m18.731s 00:07:30.334 user 0m41.428s 00:07:30.334 sys 0m3.295s 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.334 16:13:20 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 ************************************ 00:07:30.334 END TEST app_repeat 00:07:30.334 ************************************ 00:07:30.334 16:13:20 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:30.334 16:13:20 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:30.334 16:13:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.334 16:13:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.334 16:13:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:30.334 ************************************ 00:07:30.334 START TEST cpu_locks 00:07:30.334 ************************************ 00:07:30.334 16:13:20 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:30.334 * Looking for test storage... 
00:07:30.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.604 16:13:20 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.604 --rc genhtml_branch_coverage=1 00:07:30.604 --rc genhtml_function_coverage=1 00:07:30.604 --rc genhtml_legend=1 00:07:30.604 --rc geninfo_all_blocks=1 00:07:30.604 --rc geninfo_unexecuted_blocks=1 00:07:30.604 00:07:30.604 ' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.604 --rc genhtml_branch_coverage=1 00:07:30.604 --rc genhtml_function_coverage=1 00:07:30.604 --rc genhtml_legend=1 00:07:30.604 --rc geninfo_all_blocks=1 00:07:30.604 --rc geninfo_unexecuted_blocks=1 
00:07:30.604 00:07:30.604 ' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.604 --rc genhtml_branch_coverage=1 00:07:30.604 --rc genhtml_function_coverage=1 00:07:30.604 --rc genhtml_legend=1 00:07:30.604 --rc geninfo_all_blocks=1 00:07:30.604 --rc geninfo_unexecuted_blocks=1 00:07:30.604 00:07:30.604 ' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.604 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.604 --rc genhtml_branch_coverage=1 00:07:30.604 --rc genhtml_function_coverage=1 00:07:30.604 --rc genhtml_legend=1 00:07:30.604 --rc geninfo_all_blocks=1 00:07:30.604 --rc geninfo_unexecuted_blocks=1 00:07:30.604 00:07:30.604 ' 00:07:30.604 16:13:20 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:30.604 16:13:20 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:30.604 16:13:20 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:30.604 16:13:20 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.604 16:13:20 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.604 ************************************ 00:07:30.604 START TEST default_locks 00:07:30.604 ************************************ 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=107929 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 
0x1 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 107929 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107929 ']' 00:07:30.604 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:30.605 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:30.605 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:30.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:30.605 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:30.605 16:13:20 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:30.605 [2024-11-19 16:13:20.839239] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:30.605 [2024-11-19 16:13:20.839314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107929 ] 00:07:30.605 [2024-11-19 16:13:20.904214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.863 [2024-11-19 16:13:20.953963] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.121 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.121 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:31.121 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 107929 00:07:31.121 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 107929 00:07:31.121 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.380 lslocks: write error 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 107929 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 107929 ']' 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 107929 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 107929 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 107929' 00:07:31.380 killing process with pid 107929 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 107929 00:07:31.380 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 107929 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 107929 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 107929 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 107929 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 107929 ']' 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (107929) - No such process 00:07:31.639 ERROR: process (pid: 107929) is no longer running 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:31.639 00:07:31.639 real 0m1.123s 00:07:31.639 user 0m1.090s 00:07:31.639 sys 0m0.509s 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:31.639 16:13:21 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 ************************************ 00:07:31.639 END TEST default_locks 00:07:31.639 ************************************ 00:07:31.639 16:13:21 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:31.639 16:13:21 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:31.639 16:13:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:31.639 16:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:31.639 ************************************ 00:07:31.639 START TEST default_locks_via_rpc 00:07:31.639 ************************************ 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=108152 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 108152 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 108152 ']' 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:31.639 16:13:21 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:31.898 [2024-11-19 16:13:22.018994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:31.898 [2024-11-19 16:13:22.019109] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108152 ] 00:07:31.898 [2024-11-19 16:13:22.084442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.898 [2024-11-19 16:13:22.133280] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.157 16:13:22 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 108152 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 108152 00:07:32.157 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 108152 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 108152 ']' 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 108152 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108152 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108152' 00:07:32.416 killing process with pid 108152 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 108152 00:07:32.416 16:13:22 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 108152 00:07:32.983 00:07:32.984 real 0m1.112s 00:07:32.984 user 0m1.077s 00:07:32.984 sys 0m0.496s 00:07:32.984 16:13:23 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.984 16:13:23 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:32.984 ************************************ 00:07:32.984 END TEST default_locks_via_rpc 00:07:32.984 ************************************ 00:07:32.984 16:13:23 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:32.984 16:13:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.984 16:13:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.984 16:13:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.984 ************************************ 00:07:32.984 START TEST non_locking_app_on_locked_coremask 00:07:32.984 ************************************ 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=108324 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 108324 /var/tmp/spdk.sock 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108324 ']' 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:32.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.984 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.984 [2024-11-19 16:13:23.176481] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:32.984 [2024-11-19 16:13:23.176571] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108324 ] 00:07:32.984 [2024-11-19 16:13:23.242148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.984 [2024-11-19 16:13:23.291208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=108333 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 108333 /var/tmp/spdk2.sock 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108333 ']' 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:33.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.243 16:13:23 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:33.502 [2024-11-19 16:13:23.596739] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:33.502 [2024-11-19 16:13:23.596817] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108333 ] 00:07:33.502 [2024-11-19 16:13:23.694174] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:33.502 [2024-11-19 16:13:23.694201] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.502 [2024-11-19 16:13:23.782385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.437 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.437 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:34.437 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 108324 00:07:34.437 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108324 00:07:34.437 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:34.696 lslocks: write error 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 108324 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108324 ']' 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108324 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108324 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 108324' 00:07:34.696 killing process with pid 108324 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108324 00:07:34.696 16:13:24 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108324 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 108333 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108333 ']' 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 108333 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108333 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108333' 00:07:35.632 killing process with pid 108333 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 108333 00:07:35.632 16:13:25 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 108333 00:07:35.894 00:07:35.894 real 0m2.990s 00:07:35.894 user 0m3.212s 00:07:35.894 sys 0m0.988s 00:07:35.894 16:13:26 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:35.894 16:13:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.894 ************************************ 00:07:35.894 END TEST non_locking_app_on_locked_coremask 00:07:35.894 ************************************ 00:07:35.894 16:13:26 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:35.894 16:13:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:35.894 16:13:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:35.894 16:13:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:35.894 ************************************ 00:07:35.894 START TEST locking_app_on_unlocked_coremask 00:07:35.894 ************************************ 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=108638 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 108638 /var/tmp/spdk.sock 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108638 ']' 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.894 16:13:26 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.894 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.894 [2024-11-19 16:13:26.225723] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:35.894 [2024-11-19 16:13:26.225822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108638 ] 00:07:36.153 [2024-11-19 16:13:26.290062] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:36.153 [2024-11-19 16:13:26.290095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.153 [2024-11-19 16:13:26.334609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=108763
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 108763 /var/tmp/spdk2.sock
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 108763 ']'
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:36.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:36.412 16:13:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:36.412 [2024-11-19 16:13:26.657704] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:36.412 [2024-11-19 16:13:26.657804] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108763 ]
00:07:36.671 [2024-11-19 16:13:26.758360] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:36.671 [2024-11-19 16:13:26.846690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:37.240 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:37.240 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:37.240 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 108763
00:07:37.240 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 108763
00:07:37.240 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:37.806 lslocks: write error
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 108638
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108638 ']'
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108638
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108638
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108638'
00:07:37.806 killing process with pid 108638
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108638
00:07:37.806 16:13:27 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108638
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 108763
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 108763 ']'
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 108763
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108763
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108763'
00:07:38.374 killing process with pid 108763
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 108763
00:07:38.374 16:13:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 108763
00:07:38.941
00:07:38.941 real 0m2.918s
00:07:38.941 user 0m2.943s
00:07:38.941 sys 0m0.996s
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:38.941 ************************************
00:07:38.941 END TEST locking_app_on_unlocked_coremask
00:07:38.941 ************************************
00:07:38.941 16:13:29 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:07:38.941 16:13:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:38.941 16:13:29 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:38.941 16:13:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:38.941 ************************************
00:07:38.941 START TEST locking_app_on_locked_coremask
00:07:38.941 ************************************
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=109070
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 109070 /var/tmp/spdk.sock
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109070 ']'
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:38.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:38.941 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:38.941 [2024-11-19 16:13:29.194226] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:38.941 [2024-11-19 16:13:29.194319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109070 ]
00:07:38.941 [2024-11-19 16:13:29.260104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:39.200 [2024-11-19 16:13:29.310656] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=109075
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 109075 /var/tmp/spdk2.sock
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109075 /var/tmp/spdk2.sock
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109075 /var/tmp/spdk2.sock
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 109075 ']'
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:39.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:39.459 16:13:29 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:39.459 [2024-11-19 16:13:29.616530] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:39.459 [2024-11-19 16:13:29.616605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109075 ]
00:07:39.459 [2024-11-19 16:13:29.715262] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 109070 has claimed it.
00:07:39.459 [2024-11-19 16:13:29.715312] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:40.026 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109075) - No such process
00:07:40.026 ERROR: process (pid: 109075) is no longer running
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 109070
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 109070
00:07:40.026 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:07:40.593 lslocks: write error
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 109070
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 109070 ']'
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 109070
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109070
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109070'
00:07:40.593 killing process with pid 109070
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 109070
00:07:40.593 16:13:30 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 109070
00:07:40.854
00:07:40.854 real 0m1.996s
00:07:40.854 user 0m2.202s
00:07:40.854 sys 0m0.674s
00:07:40.854 16:13:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:40.854 16:13:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:40.854 ************************************
00:07:40.854 END TEST locking_app_on_locked_coremask
00:07:40.854 ************************************
00:07:40.854 16:13:31 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:07:40.854 16:13:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:40.854 16:13:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:40.854 16:13:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:40.854 ************************************
00:07:40.854 START TEST locking_overlapped_coremask
00:07:40.854 ************************************
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=109363
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 109363 /var/tmp/spdk.sock
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109363 ']'
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:40.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:40.854 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:41.113 [2024-11-19 16:13:31.241257] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:41.114 [2024-11-19 16:13:31.241335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109363 ]
00:07:41.114 [2024-11-19 16:13:31.305497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:41.114 [2024-11-19 16:13:31.353332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-19 16:13:31.353355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-19 16:13:31.353358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=109373
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 109373 /var/tmp/spdk2.sock
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 109373 /var/tmp/spdk2.sock
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 109373 /var/tmp/spdk2.sock
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 109373 ']'
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:41.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:41.373 16:13:31 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:41.373 [2024-11-19 16:13:31.678662] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:41.373 [2024-11-19 16:13:31.678760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109373 ]
00:07:41.631 [2024-11-19 16:13:31.784423] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109363 has claimed it.
00:07:41.631 [2024-11-19 16:13:31.784489] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:07:42.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (109373) - No such process
00:07:42.199 ERROR: process (pid: 109373) is no longer running
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 109363
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 109363 ']'
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 109363
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109363
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109363'
00:07:42.199 killing process with pid 109363
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 109363
00:07:42.199 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 109363
00:07:42.767
00:07:42.767 real 0m1.621s
00:07:42.767 user 0m4.550s
00:07:42.767 sys 0m0.464s
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:07:42.767 ************************************
00:07:42.767 END TEST locking_overlapped_coremask
************************************
00:07:42.767 16:13:32 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:07:42.767 16:13:32 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:42.767 16:13:32 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:42.767 16:13:32 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:07:42.767 ************************************
00:07:42.767 START TEST locking_overlapped_coremask_via_rpc
00:07:42.767 ************************************
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=109537
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 109537 /var/tmp/spdk.sock
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109537 ']'
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:42.767 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:42.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:42.768 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:42.768 16:13:32 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:42.768 [2024-11-19 16:13:32.915631] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:42.768 [2024-11-19 16:13:32.915734] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109537 ]
00:07:42.768 [2024-11-19 16:13:32.981765] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:42.768 [2024-11-19 16:13:32.981795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:42.768 [2024-11-19 16:13:33.026801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
[2024-11-19 16:13:33.026911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
[2024-11-19 16:13:33.026919] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=109667
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 109667 /var/tmp/spdk2.sock
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109667 ']'
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:07:43.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:07:43.027 16:13:33 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:43.286 [2024-11-19 16:13:33.343313] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:07:43.286 [2024-11-19 16:13:33.343418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109667 ]
00:07:43.286 [2024-11-19 16:13:33.450443] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:07:43.286 [2024-11-19 16:13:33.450479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:43.286 [2024-11-19 16:13:33.547164] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
[2024-11-19 16:13:33.547222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
[2024-11-19 16:13:33.547225] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:07:44.221 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:07:44.222 [2024-11-19 16:13:34.347170] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 109537 has claimed it.
00:07:44.222 request:
00:07:44.222 {
00:07:44.222 "method": "framework_enable_cpumask_locks",
00:07:44.222 "req_id": 1
00:07:44.222 }
00:07:44.222 Got JSON-RPC error response
00:07:44.222 response:
00:07:44.222 {
00:07:44.222 "code": -32603,
00:07:44.222 "message": "Failed to claim CPU core: 2"
00:07:44.222 }
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 109537 /var/tmp/spdk.sock
00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 --
# '[' -z 109537 ']' 00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.222 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 109667 /var/tmp/spdk2.sock 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 109667 ']' 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:44.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.480 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:44.739 00:07:44.739 real 0m2.029s 00:07:44.739 user 0m1.137s 00:07:44.739 sys 0m0.161s 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.739 16:13:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:44.739 ************************************ 00:07:44.739 END TEST locking_overlapped_coremask_via_rpc 00:07:44.739 ************************************ 00:07:44.739 16:13:34 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:44.739 16:13:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109537 ]] 00:07:44.739 16:13:34 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 109537 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109537 ']' 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109537 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109537 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109537' 00:07:44.739 killing process with pid 109537 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109537 00:07:44.739 16:13:34 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109537 00:07:45.305 16:13:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109667 ]] 00:07:45.305 16:13:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109667 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109667 ']' 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109667 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 109667 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 109667' 00:07:45.305 
killing process with pid 109667 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 109667 00:07:45.305 16:13:35 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 109667 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 109537 ]] 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 109537 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109537 ']' 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109537 00:07:45.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109537) - No such process 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109537 is not found' 00:07:45.565 Process with pid 109537 is not found 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 109667 ]] 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 109667 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 109667 ']' 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 109667 00:07:45.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (109667) - No such process 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 109667 is not found' 00:07:45.565 Process with pid 109667 is not found 00:07:45.565 16:13:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:45.565 00:07:45.565 real 0m15.148s 00:07:45.565 user 0m27.730s 00:07:45.565 sys 0m5.227s 00:07:45.565 16:13:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.565 16:13:35 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:07:45.565 ************************************ 00:07:45.565 END TEST cpu_locks 00:07:45.565 ************************************ 00:07:45.565 00:07:45.566 real 0m39.696s 00:07:45.566 user 1m18.262s 00:07:45.566 sys 0m9.353s 00:07:45.566 16:13:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.566 16:13:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:45.566 ************************************ 00:07:45.566 END TEST event 00:07:45.566 ************************************ 00:07:45.566 16:13:35 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:45.566 16:13:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.566 16:13:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.566 16:13:35 -- common/autotest_common.sh@10 -- # set +x 00:07:45.566 ************************************ 00:07:45.566 START TEST thread 00:07:45.566 ************************************ 00:07:45.566 16:13:35 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:45.566 * Looking for test storage... 
00:07:45.824 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.825 16:13:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.825 16:13:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.825 16:13:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.825 16:13:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.825 16:13:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.825 16:13:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.825 16:13:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.825 16:13:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.825 16:13:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.825 16:13:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.825 16:13:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.825 16:13:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:45.825 16:13:35 thread -- scripts/common.sh@345 -- # : 1 00:07:45.825 16:13:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.825 16:13:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.825 16:13:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:45.825 16:13:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:45.825 16:13:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.825 16:13:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:45.825 16:13:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.825 16:13:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:45.825 16:13:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:45.825 16:13:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.825 16:13:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:45.825 16:13:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.825 16:13:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.825 16:13:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.825 16:13:35 thread -- scripts/common.sh@368 -- # return 0 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.825 16:13:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.825 --rc genhtml_branch_coverage=1 00:07:45.825 --rc genhtml_function_coverage=1 00:07:45.825 --rc genhtml_legend=1 00:07:45.825 --rc geninfo_all_blocks=1 00:07:45.825 --rc geninfo_unexecuted_blocks=1 00:07:45.825 00:07:45.825 ' 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.825 --rc genhtml_branch_coverage=1 00:07:45.825 --rc genhtml_function_coverage=1 00:07:45.825 --rc genhtml_legend=1 00:07:45.825 --rc geninfo_all_blocks=1 00:07:45.825 --rc geninfo_unexecuted_blocks=1 00:07:45.825 00:07:45.825 ' 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.825 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.825 --rc genhtml_branch_coverage=1 00:07:45.825 --rc genhtml_function_coverage=1 00:07:45.825 --rc genhtml_legend=1 00:07:45.825 --rc geninfo_all_blocks=1 00:07:45.825 --rc geninfo_unexecuted_blocks=1 00:07:45.825 00:07:45.825 ' 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.825 --rc genhtml_branch_coverage=1 00:07:45.825 --rc genhtml_function_coverage=1 00:07:45.825 --rc genhtml_legend=1 00:07:45.825 --rc geninfo_all_blocks=1 00:07:45.825 --rc geninfo_unexecuted_blocks=1 00:07:45.825 00:07:45.825 ' 00:07:45.825 16:13:36 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.825 16:13:36 thread -- common/autotest_common.sh@10 -- # set +x 00:07:45.825 ************************************ 00:07:45.825 START TEST thread_poller_perf 00:07:45.825 ************************************ 00:07:45.825 16:13:36 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:45.825 [2024-11-19 16:13:36.039150] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:45.825 [2024-11-19 16:13:36.039218] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110050 ] 00:07:45.825 [2024-11-19 16:13:36.104258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.825 [2024-11-19 16:13:36.148214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.825 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:47.200 [2024-11-19T15:13:37.539Z] ====================================== 00:07:47.200 [2024-11-19T15:13:37.539Z] busy:2712959883 (cyc) 00:07:47.200 [2024-11-19T15:13:37.539Z] total_run_count: 366000 00:07:47.200 [2024-11-19T15:13:37.539Z] tsc_hz: 2700000000 (cyc) 00:07:47.200 [2024-11-19T15:13:37.539Z] ====================================== 00:07:47.200 [2024-11-19T15:13:37.539Z] poller_cost: 7412 (cyc), 2745 (nsec) 00:07:47.200 00:07:47.200 real 0m1.174s 00:07:47.200 user 0m1.108s 00:07:47.200 sys 0m0.061s 00:07:47.200 16:13:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:47.200 16:13:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:47.200 ************************************ 00:07:47.200 END TEST thread_poller_perf 00:07:47.200 ************************************ 00:07:47.200 16:13:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.200 16:13:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:47.200 16:13:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:47.200 16:13:37 thread -- common/autotest_common.sh@10 -- # set +x 00:07:47.200 ************************************ 00:07:47.200 START TEST thread_poller_perf 00:07:47.200 
************************************ 00:07:47.200 16:13:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:47.200 [2024-11-19 16:13:37.261523] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:47.200 [2024-11-19 16:13:37.261587] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110203 ] 00:07:47.200 [2024-11-19 16:13:37.324960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.200 [2024-11-19 16:13:37.369750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.200 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:48.135 [2024-11-19T15:13:38.474Z] ====================================== 00:07:48.135 [2024-11-19T15:13:38.474Z] busy:2701922832 (cyc) 00:07:48.135 [2024-11-19T15:13:38.474Z] total_run_count: 4787000 00:07:48.135 [2024-11-19T15:13:38.474Z] tsc_hz: 2700000000 (cyc) 00:07:48.135 [2024-11-19T15:13:38.474Z] ====================================== 00:07:48.135 [2024-11-19T15:13:38.474Z] poller_cost: 564 (cyc), 208 (nsec) 00:07:48.135 00:07:48.135 real 0m1.166s 00:07:48.135 user 0m1.098s 00:07:48.135 sys 0m0.063s 00:07:48.135 16:13:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.135 16:13:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 ************************************ 00:07:48.135 END TEST thread_poller_perf 00:07:48.135 ************************************ 00:07:48.135 16:13:38 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:48.135 00:07:48.135 real 0m2.586s 00:07:48.135 user 0m2.351s 00:07:48.135 sys 0m0.240s 00:07:48.135 16:13:38 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:48.135 16:13:38 thread -- common/autotest_common.sh@10 -- # set +x 00:07:48.135 ************************************ 00:07:48.135 END TEST thread 00:07:48.135 ************************************ 00:07:48.135 16:13:38 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:48.135 16:13:38 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.135 16:13:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:48.135 16:13:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:48.135 16:13:38 -- common/autotest_common.sh@10 -- # set +x 00:07:48.392 ************************************ 00:07:48.392 START TEST app_cmdline 00:07:48.392 ************************************ 00:07:48.392 16:13:38 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:48.392 * Looking for test storage... 00:07:48.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:48.392 16:13:38 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:48.392 16:13:38 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:48.392 16:13:38 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:48.392 16:13:38 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:48.392 16:13:38 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:48.393 16:13:38 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.393 --rc genhtml_branch_coverage=1 
00:07:48.393 --rc genhtml_function_coverage=1 00:07:48.393 --rc genhtml_legend=1 00:07:48.393 --rc geninfo_all_blocks=1 00:07:48.393 --rc geninfo_unexecuted_blocks=1 00:07:48.393 00:07:48.393 ' 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.393 --rc genhtml_branch_coverage=1 00:07:48.393 --rc genhtml_function_coverage=1 00:07:48.393 --rc genhtml_legend=1 00:07:48.393 --rc geninfo_all_blocks=1 00:07:48.393 --rc geninfo_unexecuted_blocks=1 00:07:48.393 00:07:48.393 ' 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.393 --rc genhtml_branch_coverage=1 00:07:48.393 --rc genhtml_function_coverage=1 00:07:48.393 --rc genhtml_legend=1 00:07:48.393 --rc geninfo_all_blocks=1 00:07:48.393 --rc geninfo_unexecuted_blocks=1 00:07:48.393 00:07:48.393 ' 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:48.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:48.393 --rc genhtml_branch_coverage=1 00:07:48.393 --rc genhtml_function_coverage=1 00:07:48.393 --rc genhtml_legend=1 00:07:48.393 --rc geninfo_all_blocks=1 00:07:48.393 --rc geninfo_unexecuted_blocks=1 00:07:48.393 00:07:48.393 ' 00:07:48.393 16:13:38 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:48.393 16:13:38 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=110404 00:07:48.393 16:13:38 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:48.393 16:13:38 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 110404 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 110404 ']' 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.393 16:13:38 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:48.393 [2024-11-19 16:13:38.678209] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:07:48.393 [2024-11-19 16:13:38.678308] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110404 ] 00:07:48.651 [2024-11-19 16:13:38.744765] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.651 [2024-11-19 16:13:38.791967] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.909 16:13:39 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.909 16:13:39 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:48.909 16:13:39 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:49.167 { 00:07:49.167 "version": "SPDK v25.01-pre git sha1 dcc2ca8f3", 00:07:49.167 "fields": { 00:07:49.167 "major": 25, 00:07:49.167 "minor": 1, 00:07:49.167 "patch": 0, 00:07:49.167 "suffix": "-pre", 00:07:49.167 "commit": "dcc2ca8f3" 00:07:49.167 } 00:07:49.167 } 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:49.167 16:13:39 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.167 16:13:39 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:07:49.168 16:13:39 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:49.168 16:13:39 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:49.168 16:13:39 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:49.426 request: 00:07:49.426 { 00:07:49.426 "method": "env_dpdk_get_mem_stats", 00:07:49.426 "req_id": 1 00:07:49.426 } 00:07:49.426 Got JSON-RPC error response 00:07:49.426 response: 00:07:49.426 { 00:07:49.426 "code": -32601, 00:07:49.426 "message": "Method not found" 00:07:49.426 } 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:49.426 16:13:39 app_cmdline -- app/cmdline.sh@1 -- # killprocess 110404 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 110404 ']' 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 110404 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110404 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110404' 00:07:49.426 killing process with pid 110404 00:07:49.426 16:13:39 
app_cmdline -- common/autotest_common.sh@973 -- # kill 110404 00:07:49.426 16:13:39 app_cmdline -- common/autotest_common.sh@978 -- # wait 110404 00:07:49.684 00:07:49.684 real 0m1.529s 00:07:49.684 user 0m1.894s 00:07:49.684 sys 0m0.491s 00:07:49.685 16:13:40 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.685 16:13:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:49.685 ************************************ 00:07:49.685 END TEST app_cmdline 00:07:49.685 ************************************ 00:07:49.944 16:13:40 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:49.944 16:13:40 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.944 16:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.944 16:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:49.944 ************************************ 00:07:49.944 START TEST version 00:07:49.944 ************************************ 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:49.945 * Looking for test storage... 
00:07:49.945 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:49.945 16:13:40 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.945 16:13:40 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.945 16:13:40 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.945 16:13:40 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.945 16:13:40 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.945 16:13:40 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.945 16:13:40 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.945 16:13:40 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.945 16:13:40 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.945 16:13:40 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.945 16:13:40 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.945 16:13:40 version -- scripts/common.sh@344 -- # case "$op" in 00:07:49.945 16:13:40 version -- scripts/common.sh@345 -- # : 1 00:07:49.945 16:13:40 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.945 16:13:40 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:49.945 16:13:40 version -- scripts/common.sh@365 -- # decimal 1 00:07:49.945 16:13:40 version -- scripts/common.sh@353 -- # local d=1 00:07:49.945 16:13:40 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.945 16:13:40 version -- scripts/common.sh@355 -- # echo 1 00:07:49.945 16:13:40 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.945 16:13:40 version -- scripts/common.sh@366 -- # decimal 2 00:07:49.945 16:13:40 version -- scripts/common.sh@353 -- # local d=2 00:07:49.945 16:13:40 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.945 16:13:40 version -- scripts/common.sh@355 -- # echo 2 00:07:49.945 16:13:40 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.945 16:13:40 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.945 16:13:40 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.945 16:13:40 version -- scripts/common.sh@368 -- # return 0 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.945 --rc genhtml_branch_coverage=1 00:07:49.945 --rc genhtml_function_coverage=1 00:07:49.945 --rc genhtml_legend=1 00:07:49.945 --rc geninfo_all_blocks=1 00:07:49.945 --rc geninfo_unexecuted_blocks=1 00:07:49.945 00:07:49.945 ' 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.945 --rc genhtml_branch_coverage=1 00:07:49.945 --rc genhtml_function_coverage=1 00:07:49.945 --rc genhtml_legend=1 00:07:49.945 --rc geninfo_all_blocks=1 00:07:49.945 --rc geninfo_unexecuted_blocks=1 00:07:49.945 00:07:49.945 ' 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:49.945 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.945 --rc genhtml_branch_coverage=1 00:07:49.945 --rc genhtml_function_coverage=1 00:07:49.945 --rc genhtml_legend=1 00:07:49.945 --rc geninfo_all_blocks=1 00:07:49.945 --rc geninfo_unexecuted_blocks=1 00:07:49.945 00:07:49.945 ' 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:49.945 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.945 --rc genhtml_branch_coverage=1 00:07:49.945 --rc genhtml_function_coverage=1 00:07:49.945 --rc genhtml_legend=1 00:07:49.945 --rc geninfo_all_blocks=1 00:07:49.945 --rc geninfo_unexecuted_blocks=1 00:07:49.945 00:07:49.945 ' 00:07:49.945 16:13:40 version -- app/version.sh@17 -- # get_header_version major 00:07:49.945 16:13:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # cut -f2 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:49.945 16:13:40 version -- app/version.sh@17 -- # major=25 00:07:49.945 16:13:40 version -- app/version.sh@18 -- # get_header_version minor 00:07:49.945 16:13:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # cut -f2 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:49.945 16:13:40 version -- app/version.sh@18 -- # minor=1 00:07:49.945 16:13:40 version -- app/version.sh@19 -- # get_header_version patch 00:07:49.945 16:13:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # cut -f2 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:49.945 
16:13:40 version -- app/version.sh@19 -- # patch=0 00:07:49.945 16:13:40 version -- app/version.sh@20 -- # get_header_version suffix 00:07:49.945 16:13:40 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # cut -f2 00:07:49.945 16:13:40 version -- app/version.sh@14 -- # tr -d '"' 00:07:49.945 16:13:40 version -- app/version.sh@20 -- # suffix=-pre 00:07:49.945 16:13:40 version -- app/version.sh@22 -- # version=25.1 00:07:49.945 16:13:40 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:49.945 16:13:40 version -- app/version.sh@28 -- # version=25.1rc0 00:07:49.945 16:13:40 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:49.945 16:13:40 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:49.945 16:13:40 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:49.945 16:13:40 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:49.945 00:07:49.945 real 0m0.183s 00:07:49.945 user 0m0.130s 00:07:49.945 sys 0m0.077s 00:07:49.945 16:13:40 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.945 16:13:40 version -- common/autotest_common.sh@10 -- # set +x 00:07:49.945 ************************************ 00:07:49.945 END TEST version 00:07:49.945 ************************************ 00:07:49.945 16:13:40 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:49.945 16:13:40 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:49.945 16:13:40 -- spdk/autotest.sh@194 -- # uname -s 00:07:49.945 16:13:40 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:49.945 16:13:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:49.945 16:13:40 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:49.945 16:13:40 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:49.945 16:13:40 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:49.945 16:13:40 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:49.945 16:13:40 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:49.945 16:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:50.205 16:13:40 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:50.205 16:13:40 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:50.205 16:13:40 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:50.205 16:13:40 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:50.205 16:13:40 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:50.205 16:13:40 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:50.205 16:13:40 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.205 16:13:40 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.205 16:13:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.205 16:13:40 -- common/autotest_common.sh@10 -- # set +x 00:07:50.205 ************************************ 00:07:50.205 START TEST nvmf_tcp 00:07:50.205 ************************************ 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:50.205 * Looking for test storage... 
00:07:50.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.205 16:13:40 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.205 --rc genhtml_branch_coverage=1 00:07:50.205 --rc genhtml_function_coverage=1 00:07:50.205 --rc genhtml_legend=1 00:07:50.205 --rc geninfo_all_blocks=1 00:07:50.205 --rc geninfo_unexecuted_blocks=1 00:07:50.205 00:07:50.205 ' 00:07:50.205 16:13:40 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.205 --rc genhtml_branch_coverage=1 00:07:50.206 --rc genhtml_function_coverage=1 00:07:50.206 --rc genhtml_legend=1 00:07:50.206 --rc geninfo_all_blocks=1 00:07:50.206 --rc geninfo_unexecuted_blocks=1 00:07:50.206 00:07:50.206 ' 00:07:50.206 16:13:40 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.206 --rc genhtml_branch_coverage=1 00:07:50.206 --rc genhtml_function_coverage=1 00:07:50.206 --rc genhtml_legend=1 00:07:50.206 --rc geninfo_all_blocks=1 00:07:50.206 --rc geninfo_unexecuted_blocks=1 00:07:50.206 00:07:50.206 ' 00:07:50.206 16:13:40 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.206 --rc genhtml_branch_coverage=1 00:07:50.206 --rc genhtml_function_coverage=1 00:07:50.206 --rc genhtml_legend=1 00:07:50.206 --rc geninfo_all_blocks=1 00:07:50.206 --rc geninfo_unexecuted_blocks=1 00:07:50.206 00:07:50.206 ' 00:07:50.206 16:13:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:50.206 16:13:40 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:50.206 16:13:40 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.206 16:13:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.206 16:13:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.206 16:13:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:50.206 ************************************ 00:07:50.206 START TEST nvmf_target_core 00:07:50.206 ************************************ 00:07:50.206 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:50.465 * Looking for test storage... 
00:07:50.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:50.465 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.465 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.466 --rc genhtml_branch_coverage=1 00:07:50.466 --rc genhtml_function_coverage=1 00:07:50.466 --rc genhtml_legend=1 00:07:50.466 --rc geninfo_all_blocks=1 00:07:50.466 --rc geninfo_unexecuted_blocks=1 00:07:50.466 00:07:50.466 ' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.466 --rc genhtml_branch_coverage=1 
00:07:50.466 --rc genhtml_function_coverage=1 00:07:50.466 --rc genhtml_legend=1 00:07:50.466 --rc geninfo_all_blocks=1 00:07:50.466 --rc geninfo_unexecuted_blocks=1 00:07:50.466 00:07:50.466 ' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.466 --rc genhtml_branch_coverage=1 00:07:50.466 --rc genhtml_function_coverage=1 00:07:50.466 --rc genhtml_legend=1 00:07:50.466 --rc geninfo_all_blocks=1 00:07:50.466 --rc geninfo_unexecuted_blocks=1 00:07:50.466 00:07:50.466 ' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.466 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.466 --rc genhtml_branch_coverage=1 00:07:50.466 --rc genhtml_function_coverage=1 00:07:50.466 --rc genhtml_legend=1 00:07:50.466 --rc geninfo_all_blocks=1 00:07:50.466 --rc geninfo_unexecuted_blocks=1 00:07:50.466 00:07:50.466 ' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.466 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.466 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.467 ************************************ 00:07:50.467 START TEST nvmf_abort 00:07:50.467 ************************************ 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:50.467 * Looking for test storage... 
00:07:50.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:50.467 
16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:50.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.467 --rc genhtml_branch_coverage=1 00:07:50.467 --rc genhtml_function_coverage=1 00:07:50.467 --rc genhtml_legend=1 00:07:50.467 --rc geninfo_all_blocks=1 00:07:50.467 --rc 
geninfo_unexecuted_blocks=1 00:07:50.467 00:07:50.467 ' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:50.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.467 --rc genhtml_branch_coverage=1 00:07:50.467 --rc genhtml_function_coverage=1 00:07:50.467 --rc genhtml_legend=1 00:07:50.467 --rc geninfo_all_blocks=1 00:07:50.467 --rc geninfo_unexecuted_blocks=1 00:07:50.467 00:07:50.467 ' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:50.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.467 --rc genhtml_branch_coverage=1 00:07:50.467 --rc genhtml_function_coverage=1 00:07:50.467 --rc genhtml_legend=1 00:07:50.467 --rc geninfo_all_blocks=1 00:07:50.467 --rc geninfo_unexecuted_blocks=1 00:07:50.467 00:07:50.467 ' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:50.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:50.467 --rc genhtml_branch_coverage=1 00:07:50.467 --rc genhtml_function_coverage=1 00:07:50.467 --rc genhtml_legend=1 00:07:50.467 --rc geninfo_all_blocks=1 00:07:50.467 --rc geninfo_unexecuted_blocks=1 00:07:50.467 00:07:50.467 ' 00:07:50.467 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.727 16:13:40 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:50.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:07:50.727 16:13:40 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.267 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:53.268 16:13:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:53.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:53.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:53.268 16:13:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:53.268 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:0a:00.1: cvl_0_1' 00:07:53.268 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:53.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:53.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:07:53.268 00:07:53.268 --- 10.0.0.2 ping statistics --- 00:07:53.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.268 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:53.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:53.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:53.268 00:07:53.268 --- 10.0.0.1 ping statistics --- 00:07:53.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.268 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=112495 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 112495 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 112495 ']' 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.268 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.268 [2024-11-19 16:13:43.290438] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:07:53.268 [2024-11-19 16:13:43.290525] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.269 [2024-11-19 16:13:43.363710] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.269 [2024-11-19 16:13:43.414436] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.269 [2024-11-19 16:13:43.414497] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:53.269 [2024-11-19 16:13:43.414510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.269 [2024-11-19 16:13:43.414521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.269 [2024-11-19 16:13:43.414530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:53.269 [2024-11-19 16:13:43.415953] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.269 [2024-11-19 16:13:43.416016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.269 [2024-11-19 16:13:43.416019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 [2024-11-19 16:13:43.557442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 Malloc0 00:07:53.269 16:13:43 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.269 Delay0 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.269 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 [2024-11-19 16:13:43.622879] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.528 16:13:43 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:53.528 [2024-11-19 16:13:43.727933] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:56.063 Initializing NVMe Controllers 00:07:56.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:56.063 controller IO queue size 128 less than required 00:07:56.063 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:56.064 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:56.064 Initialization complete. Launching workers. 
00:07:56.064 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 28693 00:07:56.064 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28758, failed to submit 62 00:07:56.064 success 28697, unsuccessful 61, failed 0 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:56.064 rmmod nvme_tcp 00:07:56.064 rmmod nvme_fabrics 00:07:56.064 rmmod nvme_keyring 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:07:56.064 16:13:45 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 112495 ']' 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 112495 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 112495 ']' 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 112495 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112495 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112495' 00:07:56.064 killing process with pid 112495 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 112495 00:07:56.064 16:13:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 112495 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # 
iptables-save 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.064 16:13:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:57.979 00:07:57.979 real 0m7.514s 00:07:57.979 user 0m10.830s 00:07:57.979 sys 0m2.417s 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.979 ************************************ 00:07:57.979 END TEST nvmf_abort 00:07:57.979 ************************************ 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.979 ************************************ 00:07:57.979 START TEST nvmf_ns_hotplug_stress 00:07:57.979 ************************************ 00:07:57.979 16:13:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.979 * Looking for test storage... 00:07:57.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.979 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.239 
16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.239 16:13:48 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.239 --rc genhtml_branch_coverage=1 00:07:58.239 --rc genhtml_function_coverage=1 00:07:58.239 --rc genhtml_legend=1 00:07:58.239 --rc geninfo_all_blocks=1 00:07:58.239 --rc geninfo_unexecuted_blocks=1 00:07:58.239 00:07:58.239 ' 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.239 --rc genhtml_branch_coverage=1 00:07:58.239 --rc genhtml_function_coverage=1 00:07:58.239 --rc genhtml_legend=1 00:07:58.239 --rc geninfo_all_blocks=1 00:07:58.239 --rc geninfo_unexecuted_blocks=1 00:07:58.239 00:07:58.239 ' 00:07:58.239 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.239 --rc genhtml_branch_coverage=1 00:07:58.239 --rc genhtml_function_coverage=1 00:07:58.239 --rc genhtml_legend=1 00:07:58.240 --rc geninfo_all_blocks=1 00:07:58.240 --rc geninfo_unexecuted_blocks=1 00:07:58.240 00:07:58.240 ' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.240 --rc genhtml_branch_coverage=1 00:07:58.240 --rc genhtml_function_coverage=1 00:07:58.240 --rc genhtml_legend=1 00:07:58.240 --rc geninfo_all_blocks=1 00:07:58.240 --rc geninfo_unexecuted_blocks=1 00:07:58.240 
00:07:58.240 ' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:58.240 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:07:58.240 16:13:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:08:00.780 16:13:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.780 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.780 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:00.780 16:13:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.780 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:00.780 16:13:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.780 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.780 16:13:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:00.780 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:00.780 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:08:00.781 00:08:00.781 --- 10.0.0.2 ping statistics --- 00:08:00.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.781 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:00.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:08:00.781 00:08:00.781 --- 10.0.0.1 ping statistics --- 00:08:00.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.781 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=114854 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 114854 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 114854 ']' 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:00.781 16:13:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.781 [2024-11-19 16:13:50.835999] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:08:00.781 [2024-11-19 16:13:50.836106] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.781 [2024-11-19 16:13:50.910992] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:00.781 [2024-11-19 16:13:50.955458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.781 [2024-11-19 16:13:50.955524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:00.781 [2024-11-19 16:13:50.955548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.781 [2024-11-19 16:13:50.955559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.781 [2024-11-19 16:13:50.955569] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:00.781 [2024-11-19 16:13:50.957003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:00.781 [2024-11-19 16:13:50.957133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:00.781 [2024-11-19 16:13:50.957138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:00.781 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:01.039 [2024-11-19 16:13:51.359224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.298 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:01.557 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.815 [2024-11-19 16:13:51.898433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.815 16:13:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:02.072 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:02.331 Malloc0 00:08:02.331 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:02.590 Delay0 00:08:02.590 16:13:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:02.848 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:03.106 NULL1 00:08:03.106 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:03.364 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=115169 00:08:03.364 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:03.364 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:03.364 16:13:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:04.739 Read completed with error (sct=0, sc=11) 00:08:04.739 16:13:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:04.739 16:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:04.739 16:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:04.997 true 00:08:04.997 16:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:04.997 16:13:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:05.930 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.189 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:06.189 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:06.447 true 00:08:06.447 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:06.447 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.705 16:13:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:06.963 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:06.963 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:07.221 true 00:08:07.221 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:07.221 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.479 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.737 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:07.737 16:13:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:07.995 true 00:08:07.995 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:07.995 16:13:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.928 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.186 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:09.186 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:09.443 true 00:08:09.443 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:09.443 16:13:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.701 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.959 
16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:09.959 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:10.217 true 00:08:10.217 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:10.217 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.782 16:14:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.782 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:10.782 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:11.039 true 00:08:11.039 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:11.039 16:14:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.414 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.414 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:12.414 16:14:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:12.673 true 00:08:12.673 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:12.673 16:14:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:12.931 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.189 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:13.189 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:13.447 true 00:08:13.447 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:13.447 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.705 16:14:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.964 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:13.964 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:14.222 true 00:08:14.222 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:14.222 16:14:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:15.156 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:15.156 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:15.415 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:15.415 16:14:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:15.673 true 00:08:15.673 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:15.673 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.239 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.239 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:16.239 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:16.498 true 00:08:16.498 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:16.498 16:14:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.756 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.014 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:17.014 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:17.272 true 00:08:17.272 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:17.272 16:14:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.647 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.647 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:18.647 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:08:18.647 16:14:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:18.906 true 00:08:18.906 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:18.906 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.163 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.422 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:19.422 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:19.679 true 00:08:19.679 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:19.679 16:14:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.636 16:14:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.892 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:20.892 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:21.150 true 00:08:21.150 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:21.150 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.408 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.666 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:21.666 16:14:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:21.924 true 00:08:21.924 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:21.924 16:14:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:22.860 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:22.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:22.860 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.118 16:14:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:23.118 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:23.377 true 00:08:23.377 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:23.377 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.636 16:14:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.894 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:23.894 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:24.153 true 00:08:24.153 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:24.153 16:14:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.087 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:25.087 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:25.345 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:25.345 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:25.603 true 00:08:25.603 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:25.603 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:25.861 16:14:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.119 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:26.119 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:08:26.377 true 00:08:26.377 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:26.377 16:14:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.311 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.311 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.311 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 
00:08:27.311 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:27.570 true 00:08:27.570 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:27.570 16:14:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.828 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.086 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:28.086 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:28.344 true 00:08:28.344 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:28.344 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:28.601 16:14:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.859 16:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:28.859 16:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:29.117 true 00:08:29.117 16:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:29.117 16:14:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:30.492 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:30.492 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:30.492 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:30.749 true 00:08:30.749 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:30.749 16:14:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.007 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.266 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:31.266 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:31.524 true 00:08:31.524 16:14:21 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:31.524 16:14:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:32.091 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:32.091 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:32.091 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:32.349 true 00:08:32.349 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169 00:08:32.349 16:14:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.724 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.724 Initializing NVMe Controllers 00:08:33.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:33.724 Controller IO queue size 128, less than required. 00:08:33.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:33.724 Controller IO queue size 128, less than required. 
00:08:33.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:08:33.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:08:33.724 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:08:33.724 Initialization complete. Launching workers.
00:08:33.724 ========================================================
00:08:33.724 Latency(us)
00:08:33.724 Device Information : IOPS MiB/s Average min max
00:08:33.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 636.19 0.31 90154.03 3038.83 1039687.17
00:08:33.724 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9210.20 4.50 13897.26 2208.07 408673.46
00:08:33.724 ========================================================
00:08:33.724 Total : 9846.40 4.81 18824.36 2208.07 1039687.17
00:08:33.724
00:08:33.724 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 16:14:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:08:33.982 true
00:08:33.982 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 115169
00:08:33.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (115169) - No such process
00:08:33.982 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 115169
00:08:33.982 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:34.241 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:34.499 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:34.499 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:34.499 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:34.499 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.499 16:14:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:34.758 null0 00:08:34.758 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:34.758 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:34.758 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:35.017 null1 00:08:35.017 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.017 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.017 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:35.275 null2 00:08:35.275 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.275 16:14:25 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.275 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:35.533 null3 00:08:35.533 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.533 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.533 16:14:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:35.792 null4 00:08:35.792 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:35.792 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:35.792 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:36.050 null5 00:08:36.050 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.050 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.050 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:36.309 null6 00:08:36.309 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.309 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < 
nthreads )) 00:08:36.309 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:36.568 null7 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
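[Editor's note] The eight `bdev_null_create null0..null7 100 4096` calls traced above (ns_hotplug_stress.sh@58-@60) come from a simple loop over `nthreads`. A minimal sketch of that loop, reconstructed from the trace rather than copied from the script, with `rpc.py` stubbed out as a shell function since no SPDK target is running here (the real script invokes `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py`):

```shell
#!/usr/bin/env bash
# Sketch of the null-bdev creation loop seen at ns_hotplug_stress.sh@58-@60.
# rpc_py is a stand-in that just echoes the RPC it would issue.
rpc_py() { echo "rpc.py $*"; }

nthreads=8
for ((i = 0; i < nthreads; i++)); do
	# One null bdev per worker: 100 MB size, 4096-byte block size,
	# matching the "bdev_null_create nullN 100 4096" calls in the trace.
	rpc_py bdev_null_create "null$i" 100 4096
done
```

Against a live target each call returns the new bdev name (`null0`, `null1`, ...), which is what the bare `null0`..`null7` lines in the trace are.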
00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:36.568 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.569 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:36.828 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:36.829 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 119222 119223 119225 119227 119229 119231 119233 119235 00:08:36.829 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:36.829 16:14:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.087 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 
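[Editor's note] The interleaved `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` traces above all come from the `add_remove` function (ns_hotplug_stress.sh@14-@18): each worker adds its namespace and removes it again, ten times. A sketch reconstructed from the `local nsid=... bdev=...` and `(( i < 10 ))` trace lines, with `rpc.py` stubbed as echo so it runs without a live target:

```shell
#!/usr/bin/env bash
# Sketch of add_remove() as reconstructed from the sh@14-sh@18 trace lines.
rpc_py() { echo "rpc.py $*"; }

add_remove() {
	local nsid=$1 bdev=$2
	for ((i = 0; i < 10; i++)); do
		# Hot-add then hot-remove the namespace, as in the trace:
		# "nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 <bdev>"
		rpc_py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}

add_remove 1 null0
```

Eight of these run concurrently against the same subsystem, which is the hotplug stress the test is after: initiator I/O (the perf summary earlier in the log) continues while namespaces appear and disappear.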
00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.345 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:37.603 16:14:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:37.863 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.123 16:14:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.123 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.382 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:38.641 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:38.641 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:38.641 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:38.900 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:38.900 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:38.900 16:14:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.900 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:38.900 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:38.900 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:38.900 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:38.900 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.159 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.417 16:14:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.417 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.676 16:14:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:39.933 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:39.934 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:40.193 16:14:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:40.193 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:40.761 16:14:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 
16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.020 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.279 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:41.538 16:14:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:41.797 16:14:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:41.797 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 
16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.056 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:42.315 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.573 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:42.832 16:14:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:42.832 rmmod nvme_tcp 00:08:42.832 rmmod nvme_fabrics 00:08:42.832 rmmod nvme_keyring 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 114854 ']' 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 114854 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' 
-z 114854 ']' 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 114854 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 114854 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 114854' 00:08:42.832 killing process with pid 114854 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 114854 00:08:42.832 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 114854 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- 
# iptables-restore 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:43.093 16:14:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.005 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:45.005 00:08:45.005 real 0m47.083s 00:08:45.005 user 3m38.617s 00:08:45.005 sys 0m15.665s 00:08:45.005 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:45.005 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:45.005 ************************************ 00:08:45.005 END TEST nvmf_ns_hotplug_stress 00:08:45.005 ************************************ 00:08:45.005 16:14:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:45.265 ************************************ 00:08:45.265 START TEST nvmf_delete_subsystem 00:08:45.265 ************************************ 00:08:45.265 16:14:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:45.265 * Looking for test storage... 00:08:45.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:45.265 16:14:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:45.265 16:14:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:45.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.265 --rc genhtml_branch_coverage=1 00:08:45.265 --rc genhtml_function_coverage=1 00:08:45.265 --rc genhtml_legend=1 00:08:45.265 --rc geninfo_all_blocks=1 00:08:45.265 --rc geninfo_unexecuted_blocks=1 00:08:45.265 00:08:45.265 ' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:45.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.265 --rc genhtml_branch_coverage=1 00:08:45.265 --rc genhtml_function_coverage=1 00:08:45.265 --rc genhtml_legend=1 00:08:45.265 --rc geninfo_all_blocks=1 00:08:45.265 --rc geninfo_unexecuted_blocks=1 00:08:45.265 00:08:45.265 ' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:45.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.265 --rc genhtml_branch_coverage=1 00:08:45.265 --rc genhtml_function_coverage=1 00:08:45.265 --rc genhtml_legend=1 00:08:45.265 --rc geninfo_all_blocks=1 00:08:45.265 --rc geninfo_unexecuted_blocks=1 00:08:45.265 00:08:45.265 ' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:45.265 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:45.265 --rc genhtml_branch_coverage=1 00:08:45.265 --rc genhtml_function_coverage=1 00:08:45.265 --rc genhtml_legend=1 00:08:45.265 --rc geninfo_all_blocks=1 00:08:45.265 --rc geninfo_unexecuted_blocks=1 00:08:45.265 00:08:45.265 ' 
00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:45.265 16:14:35 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.265 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:45.266 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:08:45.266 16:14:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:47.801 16:14:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:47.801 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:47.801 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:47.801 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:0a:00.1: cvl_0_1' 00:08:47.801 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:47.801 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:47.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
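The namespace plumbing traced above (flush addresses, move the target NIC into a netns, assign the 10.0.0.x pair, open the 4420 port, then ping both directions) can be sketched end-to-end. A minimal sketch using the interface and namespace names from this log; it requires root, so by default (`DRY_RUN=1`) it only prints the commands:

```shell
# Recap of the nvmf_tcp_init steps traced in the log above.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

TARGET_IF=cvl_0_0   INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk  TARGET_IP=10.0.0.2  INITIATOR_IP=10.0.0.1

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # target NIC moves into the namespace
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 "$TARGET_IP"                          # host -> namespaced target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP"   # namespace -> host
```

Running the target inside the namespace while the initiator stays on the host is what lets a single machine exercise real TCP traffic between two physical ports of the same NIC.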
00:08:47.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:08:47.802 00:08:47.802 --- 10.0.0.2 ping statistics --- 00:08:47.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.802 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:47.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:47.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:08:47.802 00:08:47.802 --- 10.0.0.1 ping statistics --- 00:08:47.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:47.802 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:47.802 16:14:37 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=122126 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 122126 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 122126 ']' 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:47.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:47.802 16:14:37 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:47.802 [2024-11-19 16:14:37.930402] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:08:47.802 [2024-11-19 16:14:37.930507] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:47.802 [2024-11-19 16:14:38.000237] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:47.802 [2024-11-19 16:14:38.041898] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:47.802 [2024-11-19 16:14:38.041957] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:47.802 [2024-11-19 16:14:38.041980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:47.802 [2024-11-19 16:14:38.041990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:47.802 [2024-11-19 16:14:38.041999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
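The `app_setup_trace` notices above name two ways to get at the trace data: run `spdk_trace -s nvmf -i 0` live, or copy `/dev/shm/nvmf_trace.0` for offline analysis. A small helper wrapping both, assuming `spdk_trace` is on `PATH`; the function name and `/tmp` destination paths are illustrative, not from the log:

```shell
# Snapshot the nvmf target's trace data, per the app_setup_trace notices.
snapshot_nvmf_trace() {
    # $1: shared-memory trace file (defaults to the path named in the log)
    shm=${1:-/dev/shm/nvmf_trace.0}
    if [ -e "$shm" ]; then
        cp "$shm" /tmp/nvmf_trace.0                        # offline copy for later debug
        spdk_trace -s nvmf -i 0 > /tmp/trace_snapshot.txt  # live snapshot of events
    else
        echo "trace shm $shm not present; is nvmf_tgt running with a -e trace mask?"
    fi
}
```

The trace file only exists because the target was started with `-e 0xFFFF` (the "Tracepoint Group Mask 0xFFFF" notice above); without a mask there is nothing to snapshot.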
00:08:47.802 [2024-11-19 16:14:38.043392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:47.802 [2024-11-19 16:14:38.043398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.062 [2024-11-19 16:14:38.185902] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:48.062 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 [2024-11-19 16:14:38.202183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 NULL1 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 Delay0 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.063 16:14:38 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=122152 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:48.063 16:14:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:48.063 [2024-11-19 16:14:38.286939] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
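The target-side RPC sequence traced above (transport, subsystem, listener, null bdev, delay bdev, namespace), collected in one place. A sketch: the `scripts/rpc.py` path is an assumption, and by default the commands are only printed (set `RPC_PREFIX=` empty to execute them against a running `nvmf_tgt`):

```shell
# Recap of the delete_subsystem.sh setup RPCs from the trace above.
RPC_PREFIX=${RPC_PREFIX-echo}
rpc() { $RPC_PREFIX scripts/rpc.py "$@"; }

NQN=nqn.2016-06.io.spdk:cnode1
rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport, 8 KiB I/O unit
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512 B blocks
rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000        # ~1 s latency on every I/O path
rpc nvmf_subsystem_add_ns "$NQN" Delay0                # slow namespace keeps I/O in flight
```

The delay bdev is the point of the test: with every I/O held for about a second, `spdk_nvme_perf` is guaranteed to have commands outstanding when `nvmf_delete_subsystem` fires, which is exactly the aborted-I/O storm recorded below.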
00:08:49.999 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:49.999 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.999 16:14:40 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:08:50.260-00:08:50.261 [condensed: repeated perf completions "Read completed with error (sct=0, sc=8)" / "Write completed with error (sct=0, sc=8)" interleaved with "starting I/O failed: -6" while the subsystem is deleted under load] 
00:08:50.261 [2024-11-19 16:14:40.412148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3014000c40 is same with the state(6) to be set 
00:08:50.261 [condensed: further repeats of "starting I/O failed: -6"] 
00:08:51.203 [2024-11-19 16:14:41.382036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x50c5b0 is same with the state(6) to be set 
00:08:51.203 [condensed: repeated "Read/Write completed with error (sct=0, sc=8)" completions] 
00:08:51.203 [condensed: repeated "Read/Write completed with error (sct=0, sc=8)" completions] 
00:08:51.203 [2024-11-19 16:14:41.409758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4feb40 is same with the state(6) to be set 
00:08:51.204 [condensed: repeated "Read/Write completed with error (sct=0, sc=8)" completions] 
00:08:51.204 [2024-11-19 16:14:41.412817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f301400d7e0 is same with the state(6) to be set 
00:08:51.204 [condensed: repeated "Read/Write completed with error (sct=0, sc=8)" completions] Read completed 
with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 [2024-11-19 16:14:41.413102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f301400d020 is same with the state(6) to be set 00:08:51.204 Read completed with error 
(sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Read completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 
00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 Write completed with error (sct=0, sc=8) 00:08:51.204 [2024-11-19 16:14:41.413365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x4fe3f0 is same with the state(6) to be set 00:08:51.204 Initializing NVMe Controllers 00:08:51.204 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:51.204 Controller IO queue size 128, less than required. 00:08:51.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:51.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:51.204 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:51.204 Initialization complete. Launching workers. 00:08:51.204 ======================================================== 00:08:51.204 Latency(us) 00:08:51.204 Device Information : IOPS MiB/s Average min max 00:08:51.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 183.97 0.09 913736.23 774.56 1013956.90 00:08:51.204 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 187.45 0.09 902365.99 797.06 1014827.34 00:08:51.204 ======================================================== 00:08:51.204 Total : 371.42 0.18 907997.98 774.56 1014827.34 00:08:51.204 00:08:51.204 [2024-11-19 16:14:41.414508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x50c5b0 (9): Bad file descriptor 00:08:51.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:51.204 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.204 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:51.204 16:14:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122152 00:08:51.204 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 122152 00:08:51.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (122152) - No such process 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 122152 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 122152 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 122152 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:51.774 
16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.774 [2024-11-19 16:14:41.938331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # 
perf_pid=122677 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:51.774 16:14:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:51.774 [2024-11-19 16:14:42.008893] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:52.344 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.345 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:52.345 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:52.914 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:52.914 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:52.914 16:14:42 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.174 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.174 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:53.174 16:14:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:53.746 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:53.746 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:53.746 16:14:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.318 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.318 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:54.318 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:54.888 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:54.888 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677 00:08:54.888 16:14:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:55.147 Initializing NVMe Controllers 00:08:55.147 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:55.147 Controller IO queue size 128, less than required. 00:08:55.147 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:55.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:55.147 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:55.147 Initialization complete. Launching workers. 
00:08:55.147 ========================================================
00:08:55.147 Latency(us)
00:08:55.147 Device Information : IOPS MiB/s Average min max
00:08:55.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004376.76 1000163.72 1042517.95
00:08:55.147 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004972.49 1000156.83 1041337.41
00:08:55.147 ========================================================
00:08:55.147 Total : 256.00 0.12 1004674.62 1000156.83 1042517.95
00:08:55.147
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 122677
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (122677) - No such process
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 122677
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:55.147 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r
nvme-tcp 00:08:55.147 rmmod nvme_tcp 00:08:55.409 rmmod nvme_fabrics 00:08:55.409 rmmod nvme_keyring 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 122126 ']' 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 122126 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 122126 ']' 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 122126 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122126 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122126' 00:08:55.409 killing process with pid 122126 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 122126 00:08:55.409 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 122126 
00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.670 16:14:45 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:57.580 00:08:57.580 real 0m12.429s 00:08:57.580 user 0m27.942s 00:08:57.580 sys 0m3.077s 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:57.580 ************************************ 00:08:57.580 END TEST 
nvmf_delete_subsystem 00:08:57.580 ************************************ 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:57.580 ************************************ 00:08:57.580 START TEST nvmf_host_management 00:08:57.580 ************************************ 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:57.580 * Looking for test storage... 00:08:57.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:08:57.580 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:57.840 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:57.841 16:14:47 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.841 --rc genhtml_branch_coverage=1 00:08:57.841 --rc genhtml_function_coverage=1 00:08:57.841 --rc genhtml_legend=1 00:08:57.841 --rc 
geninfo_all_blocks=1 00:08:57.841 --rc geninfo_unexecuted_blocks=1 00:08:57.841 00:08:57.841 ' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.841 --rc genhtml_branch_coverage=1 00:08:57.841 --rc genhtml_function_coverage=1 00:08:57.841 --rc genhtml_legend=1 00:08:57.841 --rc geninfo_all_blocks=1 00:08:57.841 --rc geninfo_unexecuted_blocks=1 00:08:57.841 00:08:57.841 ' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.841 --rc genhtml_branch_coverage=1 00:08:57.841 --rc genhtml_function_coverage=1 00:08:57.841 --rc genhtml_legend=1 00:08:57.841 --rc geninfo_all_blocks=1 00:08:57.841 --rc geninfo_unexecuted_blocks=1 00:08:57.841 00:08:57.841 ' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:57.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:57.841 --rc genhtml_branch_coverage=1 00:08:57.841 --rc genhtml_function_coverage=1 00:08:57.841 --rc genhtml_legend=1 00:08:57.841 --rc geninfo_all_blocks=1 00:08:57.841 --rc geninfo_unexecuted_blocks=1 00:08:57.841 00:08:57.841 ' 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:57.841 16:14:47 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:08:57.841 
16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:57.841 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:57.841 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:08:57.842 16:14:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.377 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.378 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.378 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:00.378 16:14:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:00.378 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.378 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:00.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:09:00.378 00:09:00.378 --- 10.0.0.2 ping statistics --- 00:09:00.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.378 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:09:00.378 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.378 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.378 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.153 ms 00:09:00.379 00:09:00.379 --- 10.0.0.1 ping statistics --- 00:09:00.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.379 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.379 16:14:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=125036 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 125036 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125036 ']' 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.379 [2024-11-19 16:14:50.386300] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:09:00.379 [2024-11-19 16:14:50.386394] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.379 [2024-11-19 16:14:50.459570] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.379 [2024-11-19 16:14:50.505026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.379 [2024-11-19 16:14:50.505105] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:00.379 [2024-11-19 16:14:50.505129] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.379 [2024-11-19 16:14:50.505140] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.379 [2024-11-19 16:14:50.505150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:00.379 [2024-11-19 16:14:50.506792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.379 [2024-11-19 16:14:50.506866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:00.379 [2024-11-19 16:14:50.506924] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:00.379 [2024-11-19 16:14:50.506927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.379 [2024-11-19 16:14:50.660640] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:00.379 16:14:50 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.379 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.638 Malloc0 00:09:00.638 [2024-11-19 16:14:50.744718] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=125086 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 125086 /var/tmp/bdevperf.sock 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 125086 ']' 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:00.638 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:00.639 { 00:09:00.639 "params": { 00:09:00.639 "name": "Nvme$subsystem", 00:09:00.639 "trtype": "$TEST_TRANSPORT", 00:09:00.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:00.639 "adrfam": "ipv4", 00:09:00.639 "trsvcid": "$NVMF_PORT", 00:09:00.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:00.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:00.639 "hdgst": ${hdgst:-false}, 
00:09:00.639 "ddgst": ${ddgst:-false} 00:09:00.639 }, 00:09:00.639 "method": "bdev_nvme_attach_controller" 00:09:00.639 } 00:09:00.639 EOF 00:09:00.639 )") 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:00.639 16:14:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:00.639 "params": { 00:09:00.639 "name": "Nvme0", 00:09:00.639 "trtype": "tcp", 00:09:00.639 "traddr": "10.0.0.2", 00:09:00.639 "adrfam": "ipv4", 00:09:00.639 "trsvcid": "4420", 00:09:00.639 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:00.639 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:00.639 "hdgst": false, 00:09:00.639 "ddgst": false 00:09:00.639 }, 00:09:00.639 "method": "bdev_nvme_attach_controller" 00:09:00.639 }' 00:09:00.639 [2024-11-19 16:14:50.823046] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:09:00.639 [2024-11-19 16:14:50.823169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125086 ] 00:09:00.639 [2024-11-19 16:14:50.897409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.639 [2024-11-19 16:14:50.944927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.898 Running I/O for 10 seconds... 
00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:00.898 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.158 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:09:01.158 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:09:01.158 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=545 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 545 -ge 100 ']' 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.421 [2024-11-19 16:14:51.561982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.421 [2024-11-19 16:14:51.562056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.421 [2024-11-19 16:14:51.562084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.421 [2024-11-19 16:14:51.562101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.421 [2024-11-19 16:14:51.562116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.421 [2024-11-19 16:14:51.562140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.421 [2024-11-19 16:14:51.562157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:09:01.421 [2024-11-19 16:14:51.562170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.421 [2024-11-19 16:14:51.562184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x973d70 is same with the state(6) to be set 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.421 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:01.421 [2024-11-19 16:14:51.571213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 
[2024-11-19 16:14:51.571346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 16:14:51 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.422 [2024-11-19 16:14:51.571516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 16:14:51 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:01.422 [2024-11-19 16:14:51.571656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 
[2024-11-19 16:14:51.571671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571845] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.571982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.571998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 
[2024-11-19 16:14:51.572407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.422 [2024-11-19 16:14:51.572456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.422 [2024-11-19 16:14:51.572471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.572984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.572998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 
16:14:51.573124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:09:01.423 [2024-11-19 16:14:51.573275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:01.423 [2024-11-19 16:14:51.573414] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x973d70 (9): Bad file descriptor 00:09:01.423 [2024-11-19 16:14:51.574534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:01.423 task offset: 81920 on job bdev=Nvme0n1 fails 00:09:01.423 00:09:01.423 Latency(us) 00:09:01.423 [2024-11-19T15:14:51.762Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.423 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:01.423 Job: Nvme0n1 ended in about 0.41 seconds with error 00:09:01.423 Verification LBA range: start 0x0 length 0x400 00:09:01.423 Nvme0n1 : 0.41 1554.45 97.15 155.44 0.00 36377.15 2560.76 35535.08 00:09:01.423 [2024-11-19T15:14:51.762Z] =================================================================================================================== 00:09:01.423 [2024-11-19T15:14:51.762Z] Total : 1554.45 97.15 155.44 0.00 36377.15 2560.76 35535.08 00:09:01.423 [2024-11-19 16:14:51.576414] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:01.423 [2024-11-19 16:14:51.589596] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 125086 00:09:02.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (125086) - No such process 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:02.364 { 00:09:02.364 "params": { 00:09:02.364 "name": "Nvme$subsystem", 00:09:02.364 "trtype": "$TEST_TRANSPORT", 00:09:02.364 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:02.364 "adrfam": "ipv4", 00:09:02.364 "trsvcid": "$NVMF_PORT", 00:09:02.364 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:02.364 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:02.364 "hdgst": ${hdgst:-false}, 00:09:02.364 "ddgst": ${ddgst:-false} 00:09:02.364 }, 00:09:02.364 "method": "bdev_nvme_attach_controller" 00:09:02.364 } 00:09:02.364 EOF 00:09:02.364 )") 00:09:02.364 16:14:52 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:09:02.364 16:14:52 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:02.365 "params": { 00:09:02.365 "name": "Nvme0", 00:09:02.365 "trtype": "tcp", 00:09:02.365 "traddr": "10.0.0.2", 00:09:02.365 "adrfam": "ipv4", 00:09:02.365 "trsvcid": "4420", 00:09:02.365 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:02.365 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:02.365 "hdgst": false, 00:09:02.365 "ddgst": false 00:09:02.365 }, 00:09:02.365 "method": "bdev_nvme_attach_controller" 00:09:02.365 }' 00:09:02.365 [2024-11-19 16:14:52.622994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:09:02.365 [2024-11-19 16:14:52.623103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid125359 ] 00:09:02.365 [2024-11-19 16:14:52.693878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.626 [2024-11-19 16:14:52.741741] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.626 Running I/O for 1 seconds... 
00:09:04.013 1664.00 IOPS, 104.00 MiB/s 00:09:04.013 Latency(us) 00:09:04.013 [2024-11-19T15:14:54.352Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:04.013 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:04.013 Verification LBA range: start 0x0 length 0x400 00:09:04.013 Nvme0n1 : 1.03 1682.01 105.13 0.00 0.00 37435.41 5558.42 33399.09 00:09:04.013 [2024-11-19T15:14:54.352Z] =================================================================================================================== 00:09:04.013 [2024-11-19T15:14:54.352Z] Total : 1682.01 105.13 0.00 0.00 37435.41 5558.42 33399.09 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:04.013 16:14:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:04.013 rmmod nvme_tcp 00:09:04.013 rmmod nvme_fabrics 00:09:04.013 rmmod nvme_keyring 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 125036 ']' 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 125036 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 125036 ']' 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 125036 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 125036 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 125036' 00:09:04.013 killing process with pid 125036 00:09:04.013 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 125036 00:09:04.013 16:14:54 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 125036 00:09:04.273 [2024-11-19 16:14:54.454356] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:04.273 16:14:54 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.815 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:06.815 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:06.815 00:09:06.815 real 0m8.681s 00:09:06.815 user 0m18.875s 
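The `iptr` step traced above (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) works because every firewall rule the suite inserts carries an `SPDK_NVMF` comment, so teardown is a plain text filter over the saved ruleset. A sketch of that filter over a canned ruleset — the sample rules are invented for illustration, and no live iptables is touched:

```shell
# Canned stand-in for `iptables-save` output: two pre-existing rules plus
# one rule tagged with the SPDK_NVMF comment the test suite adds.
rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:test-rule
-A INPUT -p icmp -j ACCEPT'

# Dropping the tagged lines restores the original ruleset; a real cleanup
# would pipe this back into iptables-restore.
cleaned=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$cleaned"
```

Tagging rules with a comment at insert time (as the suite's `ipts` wrapper does) is what makes this stateless cleanup possible.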
00:09:06.815 sys 0m2.795s 00:09:06.815 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.815 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:06.815 ************************************ 00:09:06.815 END TEST nvmf_host_management 00:09:06.815 ************************************ 00:09:06.815 16:14:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:06.816 ************************************ 00:09:06.816 START TEST nvmf_lvol 00:09:06.816 ************************************ 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:06.816 * Looking for test storage... 
00:09:06.816 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.816 16:14:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.816 --rc genhtml_branch_coverage=1 00:09:06.816 --rc genhtml_function_coverage=1 00:09:06.816 --rc genhtml_legend=1 00:09:06.816 --rc geninfo_all_blocks=1 00:09:06.816 --rc geninfo_unexecuted_blocks=1 
00:09:06.816 00:09:06.816 ' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.816 --rc genhtml_branch_coverage=1 00:09:06.816 --rc genhtml_function_coverage=1 00:09:06.816 --rc genhtml_legend=1 00:09:06.816 --rc geninfo_all_blocks=1 00:09:06.816 --rc geninfo_unexecuted_blocks=1 00:09:06.816 00:09:06.816 ' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.816 --rc genhtml_branch_coverage=1 00:09:06.816 --rc genhtml_function_coverage=1 00:09:06.816 --rc genhtml_legend=1 00:09:06.816 --rc geninfo_all_blocks=1 00:09:06.816 --rc geninfo_unexecuted_blocks=1 00:09:06.816 00:09:06.816 ' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.816 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.816 --rc genhtml_branch_coverage=1 00:09:06.816 --rc genhtml_function_coverage=1 00:09:06.816 --rc genhtml_legend=1 00:09:06.816 --rc geninfo_all_blocks=1 00:09:06.816 --rc geninfo_unexecuted_blocks=1 00:09:06.816 00:09:06.816 ' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:06.816 16:14:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:06.816 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:06.816 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:09:06.817 16:14:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
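Earlier in the trace, common.sh line 33 logged `[: : integer expression expected` because `'[' '' -eq 1 ']'` hands `-eq` an empty string where it needs an integer. A minimal reproduction with the usual guard — the variable name here is illustrative, not the suite's actual flag:

```shell
flag=""   # illustrative stand-in for an unset/empty feature flag

# '[ "$flag" -eq 1 ]' would print "integer expression expected" to stderr,
# exactly like the common.sh: line 33 message in the log. Defaulting the
# expansion with ${flag:-0} gives -eq an integer on both sides:
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

`${flag:-0}` substitutes `0` when the variable is unset *or* empty, so the numeric test always sees a number.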
00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:08.728 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:08.728 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:08.728 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:08.729 
16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:08.729 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:08.729 16:14:58 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:08.729 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:08.729 16:14:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:08.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:08.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms
00:09:08.988
00:09:08.988 --- 10.0.0.2 ping statistics ---
00:09:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.988 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:08.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:08.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms
00:09:08.988
00:09:08.988 --- 10.0.0.1 ping statistics ---
00:09:08.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:08.988 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol --
common/autotest_common.sh@726 -- # xtrace_disable 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=127572 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 127572 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 127572 ']' 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:08.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:08.988 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:08.988 [2024-11-19 16:14:59.231315] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:09:08.988 [2024-11-19 16:14:59.231421] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:08.988 [2024-11-19 16:14:59.301936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:09:09.247 [2024-11-19 16:14:59.345929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:09:09.247 [2024-11-19 16:14:59.345983] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:09:09.247 [2024-11-19 16:14:59.346005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:09:09.247 [2024-11-19 16:14:59.346015] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:09:09.247 [2024-11-19 16:14:59.346025] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
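The `-m 0x7` core mask passed to `nvmf_tgt` above is what produces the "Total cores available: 3" notice and the three reactors that start next: one bit per core. A quick plain-bash sketch (no SPDK required) of expanding such a mask into the core list:

```shell
# Expand an SPDK-style hex core mask into the cores it selects.
# 0x7 = 0b111 selects cores 0, 1 and 2 -- matching the three reactors
# started in the log above.
mask=0x7
for ((core = 0; core < 64; core++)); do
  if (( (mask >> core) & 1 )); then
    echo "core $core"
  fi
done
```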
00:09:09.247 [2024-11-19 16:14:59.347339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:09:09.247 [2024-11-19 16:14:59.347471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:09:09.247 [2024-11-19 16:14:59.347474] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:09:09.247 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:09:09.505 [2024-11-19 16:14:59.720232] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:09:09.505 16:14:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:09.764 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 '
00:09:09.764 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:09:10.333 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1
00:09:10.333 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
00:09:10.592 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs
00:09:10.851 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b86ec852-a578-481b-a70f-ef6acb7e9332
00:09:10.851 16:15:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b86ec852-a578-481b-a70f-ef6acb7e9332 lvol 20
00:09:11.110 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=71eba12a-ccd4-4f8e-b24d-a07b1e39f7be
00:09:11.110 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:09:11.369 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be
00:09:11.627 16:15:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:09:11.886 [2024-11-19 16:15:02.087543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:11.887 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:09:12.146 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=128114
00:09:12.146 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1
00:09:12.146 16:15:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18
00:09:13.087 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be MY_SNAPSHOT
00:09:13.660 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=94f6af78-29dc-4689-8507-b0724d1e4a17
00:09:13.660 16:15:03 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be 30
00:09:13.919 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 94f6af78-29dc-4689-8507-b0724d1e4a17 MY_CLONE
00:09:14.177 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=18b2d953-9723-42b6-a674-02656a1bbda9
00:09:14.177 16:15:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 18b2d953-9723-42b6-a674-02656a1bbda9
00:09:14.747 16:15:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 128114
00:09:22.898 Initializing NVMe Controllers
00:09:22.898 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:09:22.898 Controller IO queue size 128, less than required.
00:09:22.898 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
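Stripped of the xtrace noise, the RPC sequence traced above builds the stack bottom-up (two malloc bdevs striped into a raid0, an lvstore on the raid, an lvol exported over NVMe/TCP) and then exercises snapshot, resize, clone and inflate while spdk_nvme_perf writes to it. A dry-run sketch of that sequence; the `rpc` wrapper here is our own stand-in that just echoes each call (point it at `scripts/rpc.py` with a live nvmf_tgt to run it for real), and the UUIDs are the ones this particular run reported:

```shell
# Dry-run of the lvol workflow from the log. rpc() only echoes; replace it
# with the real scripts/rpc.py to execute against a running nvmf_tgt.
rpc() { echo "rpc.py $*"; }

rpc bdev_malloc_create 64 512                                  # -> Malloc0
rpc bdev_malloc_create 64 512                                  # -> Malloc1
rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # stripe both
rpc bdev_lvol_create_lvstore raid0 lvs                         # -> lvstore UUID
rpc bdev_lvol_create -u b86ec852-a578-481b-a70f-ef6acb7e9332 lvol 20
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_lvol_snapshot 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be MY_SNAPSHOT
rpc bdev_lvol_resize 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be 30
rpc bdev_lvol_clone 94f6af78-29dc-4689-8507-b0724d1e4a17 MY_CLONE
rpc bdev_lvol_inflate 18b2d953-9723-42b6-a674-02656a1bbda9
```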
00:09:22.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:09:22.898 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:09:22.898 Initialization complete. Launching workers.
00:09:22.898 ========================================================
00:09:22.898 Latency(us)
00:09:22.898 Device Information : IOPS MiB/s Average min max
00:09:22.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10124.40 39.55 12645.61 1405.79 75345.02
00:09:22.898 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10388.50 40.58 12328.77 2173.48 68410.30
00:09:22.898 ========================================================
00:09:22.898 Total : 20512.90 80.13 12485.15 1405.79 75345.02
00:09:22.898
00:09:22.898 16:15:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:09:22.898 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 71eba12a-ccd4-4f8e-b24d-a07b1e39f7be
00:09:23.158 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b86ec852-a578-481b-a70f-ef6acb7e9332
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20}
00:09:23.418 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:09:23.418 rmmod nvme_tcp
00:09:23.418 rmmod nvme_fabrics
00:09:23.419 rmmod nvme_keyring
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 127572 ']'
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 127572
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 127572 ']'
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 127572
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 127572
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 127572'
00:09:23.419 killing process with pid 127572
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 127572
00:09:23.419 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 127572
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:23.678 16:15:13 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:09:26.227
00:09:26.227 real 0m19.447s
00:09:26.227 user 1m5.844s
00:09:26.227 sys 0m5.678s
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x
00:09:26.227 ************************************
00:09:26.227 END TEST nvmf_lvol
************************************
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:09:26.227 ************************************
00:09:26.227 START TEST nvmf_lvs_grow
00:09:26.227 ************************************
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp
00:09:26.227 * Looking for test storage...
00:09:26.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-:
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-:
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<'
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2
00:09:26.227 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:09:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.228 --rc genhtml_branch_coverage=1
00:09:26.228 --rc genhtml_function_coverage=1
00:09:26.228 --rc genhtml_legend=1
00:09:26.228 --rc geninfo_all_blocks=1
00:09:26.228 --rc geninfo_unexecuted_blocks=1
00:09:26.228
00:09:26.228 '
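The scripts/common.sh trace above (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`) splits each version string on `.`/`-`/`:` and compares it field by field; here `ver1[0]=1 < ver2[0]=2` decides it immediately, so the lcov branch-coverage options get enabled. The same idea as a self-contained sketch; `version_lt` is a hypothetical stand-in, not the SPDK helper itself:

```shell
# Field-wise dotted-version comparison in the spirit of the cmp_versions
# trace above. version_lt is a hypothetical stand-in, not the SPDK helper.
version_lt() {
  local IFS=.-:                # same separators the trace splits on
  local -a v1=($1) v2=($2)
  local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < len; i++)); do
    (( 10#${v1[i]:-0} < 10#${v2[i]:-0} )) && return 0   # strictly less
    (( 10#${v1[i]:-0} > 10#${v2[i]:-0} )) && return 1   # strictly greater
  done
  return 1                     # equal -> not strictly less
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2.x"
```

Note the numeric (not lexicographic) comparison: `1.9 < 1.15` holds because 9 < 15, which is exactly why a plain string sort would get versions wrong.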
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:09:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.228 --rc genhtml_branch_coverage=1
00:09:26.228 --rc genhtml_function_coverage=1
00:09:26.228 --rc genhtml_legend=1
00:09:26.228 --rc geninfo_all_blocks=1
00:09:26.228 --rc geninfo_unexecuted_blocks=1
00:09:26.228
00:09:26.228 '
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:09:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.228 --rc genhtml_branch_coverage=1
00:09:26.228 --rc genhtml_function_coverage=1
00:09:26.228 --rc genhtml_legend=1
00:09:26.228 --rc geninfo_all_blocks=1
00:09:26.228 --rc geninfo_unexecuted_blocks=1
00:09:26.228
00:09:26.228 '
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:09:26.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:26.228 --rc genhtml_branch_coverage=1
00:09:26.228 --rc genhtml_function_coverage=1
00:09:26.228 --rc genhtml_legend=1
00:09:26.228 --rc geninfo_all_blocks=1
00:09:26.228 --rc geninfo_unexecuted_blocks=1
00:09:26.228
00:09:26.228 '
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
00:09:26.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable
00:09:26.228 16:15:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=()
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:28.136 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:09:28.137 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:09:28.137 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:09:28.137 Found net devices under 0000:0a:00.0: cvl_0_0
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow --
nvmf/common.sh@418 -- # [[ up == up ]] 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:28.137 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:28.137 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:28.399 16:15:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:28.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:28.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:09:28.399 00:09:28.399 --- 10.0.0.2 ping statistics --- 00:09:28.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.399 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:28.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:28.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms 00:09:28.399 00:09:28.399 --- 10.0.0.1 ping statistics --- 00:09:28.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:28.399 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=131905 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 131905 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 131905 ']' 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:28.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:28.399 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.399 [2024-11-19 16:15:18.609171] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:09:28.399 [2024-11-19 16:15:18.609250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:28.399 [2024-11-19 16:15:18.682525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.399 [2024-11-19 16:15:18.730226] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:28.399 [2024-11-19 16:15:18.730285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:28.399 [2024-11-19 16:15:18.730300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:28.399 [2024-11-19 16:15:18.730311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:28.399 [2024-11-19 16:15:18.730321] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:28.399 [2024-11-19 16:15:18.730958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.659 16:15:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:28.918 [2024-11-19 16:15:19.124952] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:28.918 ************************************ 00:09:28.918 START TEST lvs_grow_clean 00:09:28.918 ************************************ 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:28.918 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:29.178 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:29.178 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:29.438 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:29.438 16:15:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:29.438 16:15:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:29.697 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:29.697 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:29.697 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 lvol 150 00:09:29.958 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=81763675-7f1c-42fd-8711-0257386bf7af 00:09:29.958 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:29.958 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:30.528 [2024-11-19 16:15:20.557602] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:30.528 [2024-11-19 16:15:20.557687] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:30.528 true 00:09:30.528 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:30.528 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:30.528 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:30.528 16:15:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:30.788 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 81763675-7f1c-42fd-8711-0257386bf7af 00:09:31.049 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:31.310 [2024-11-19 16:15:21.640928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.571 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=132343 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 
132343 /var/tmp/bdevperf.sock 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 132343 ']' 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:31.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.832 16:15:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:31.832 [2024-11-19 16:15:21.964675] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:09:31.832 [2024-11-19 16:15:21.964764] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid132343 ] 00:09:31.832 [2024-11-19 16:15:22.032854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.832 [2024-11-19 16:15:22.081888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:32.092 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.092 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:32.092 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:32.350 Nvme0n1 00:09:32.350 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:32.608 [ 00:09:32.608 { 00:09:32.608 "name": "Nvme0n1", 00:09:32.608 "aliases": [ 00:09:32.608 "81763675-7f1c-42fd-8711-0257386bf7af" 00:09:32.608 ], 00:09:32.608 "product_name": "NVMe disk", 00:09:32.608 "block_size": 4096, 00:09:32.608 "num_blocks": 38912, 00:09:32.608 "uuid": "81763675-7f1c-42fd-8711-0257386bf7af", 00:09:32.608 "numa_id": 0, 00:09:32.608 "assigned_rate_limits": { 00:09:32.608 "rw_ios_per_sec": 0, 00:09:32.608 "rw_mbytes_per_sec": 0, 00:09:32.608 "r_mbytes_per_sec": 0, 00:09:32.608 "w_mbytes_per_sec": 0 00:09:32.608 }, 00:09:32.608 "claimed": false, 00:09:32.608 "zoned": false, 00:09:32.608 "supported_io_types": { 00:09:32.608 "read": true, 
00:09:32.608 "write": true, 00:09:32.608 "unmap": true, 00:09:32.608 "flush": true, 00:09:32.608 "reset": true, 00:09:32.608 "nvme_admin": true, 00:09:32.608 "nvme_io": true, 00:09:32.608 "nvme_io_md": false, 00:09:32.608 "write_zeroes": true, 00:09:32.608 "zcopy": false, 00:09:32.608 "get_zone_info": false, 00:09:32.608 "zone_management": false, 00:09:32.608 "zone_append": false, 00:09:32.608 "compare": true, 00:09:32.608 "compare_and_write": true, 00:09:32.608 "abort": true, 00:09:32.608 "seek_hole": false, 00:09:32.608 "seek_data": false, 00:09:32.608 "copy": true, 00:09:32.608 "nvme_iov_md": false 00:09:32.608 }, 00:09:32.608 "memory_domains": [ 00:09:32.608 { 00:09:32.608 "dma_device_id": "system", 00:09:32.608 "dma_device_type": 1 00:09:32.608 } 00:09:32.608 ], 00:09:32.608 "driver_specific": { 00:09:32.608 "nvme": [ 00:09:32.608 { 00:09:32.608 "trid": { 00:09:32.608 "trtype": "TCP", 00:09:32.608 "adrfam": "IPv4", 00:09:32.608 "traddr": "10.0.0.2", 00:09:32.608 "trsvcid": "4420", 00:09:32.608 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:32.608 }, 00:09:32.608 "ctrlr_data": { 00:09:32.608 "cntlid": 1, 00:09:32.608 "vendor_id": "0x8086", 00:09:32.608 "model_number": "SPDK bdev Controller", 00:09:32.608 "serial_number": "SPDK0", 00:09:32.608 "firmware_revision": "25.01", 00:09:32.608 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:32.608 "oacs": { 00:09:32.608 "security": 0, 00:09:32.608 "format": 0, 00:09:32.608 "firmware": 0, 00:09:32.608 "ns_manage": 0 00:09:32.608 }, 00:09:32.608 "multi_ctrlr": true, 00:09:32.608 "ana_reporting": false 00:09:32.608 }, 00:09:32.608 "vs": { 00:09:32.608 "nvme_version": "1.3" 00:09:32.608 }, 00:09:32.608 "ns_data": { 00:09:32.608 "id": 1, 00:09:32.608 "can_share": true 00:09:32.608 } 00:09:32.608 } 00:09:32.608 ], 00:09:32.608 "mp_policy": "active_passive" 00:09:32.608 } 00:09:32.608 } 00:09:32.608 ] 00:09:32.608 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=132479 
00:09:32.608 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:32.608 16:15:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:32.866 Running I/O for 10 seconds... 00:09:33.806 Latency(us) 00:09:33.806 [2024-11-19T15:15:24.145Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:33.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:33.806 Nvme0n1 : 1.00 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:33.806 [2024-11-19T15:15:24.145Z] =================================================================================================================== 00:09:33.806 [2024-11-19T15:15:24.145Z] Total : 14987.00 58.54 0.00 0.00 0.00 0.00 0.00 00:09:33.806 00:09:34.744 16:15:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:34.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.744 Nvme0n1 : 2.00 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:34.744 [2024-11-19T15:15:25.083Z] =================================================================================================================== 00:09:34.744 [2024-11-19T15:15:25.083Z] Total : 15177.00 59.29 0.00 0.00 0.00 0.00 0.00 00:09:34.744 00:09:35.003 true 00:09:35.003 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:35.003 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:35.264 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:35.264 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:35.264 16:15:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 132479 00:09:35.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.833 Nvme0n1 : 3.00 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:09:35.833 [2024-11-19T15:15:26.172Z] =================================================================================================================== 00:09:35.833 [2024-11-19T15:15:26.172Z] Total : 15240.33 59.53 0.00 0.00 0.00 0.00 0.00 00:09:35.833 00:09:36.777 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.777 Nvme0n1 : 4.00 15335.50 59.90 0.00 0.00 0.00 0.00 0.00 00:09:36.777 [2024-11-19T15:15:27.116Z] =================================================================================================================== 00:09:36.777 [2024-11-19T15:15:27.116Z] Total : 15335.50 59.90 0.00 0.00 0.00 0.00 0.00 00:09:36.777 00:09:37.718 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.718 Nvme0n1 : 5.00 15392.60 60.13 0.00 0.00 0.00 0.00 0.00 00:09:37.718 [2024-11-19T15:15:28.057Z] =================================================================================================================== 00:09:37.718 [2024-11-19T15:15:28.057Z] Total : 15392.60 60.13 0.00 0.00 0.00 0.00 0.00 00:09:37.718 00:09:39.101 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:39.101 Nvme0n1 : 6.00 15451.83 60.36 0.00 0.00 0.00 0.00 0.00 00:09:39.101 [2024-11-19T15:15:29.440Z] =================================================================================================================== 00:09:39.101 
[2024-11-19T15:15:29.440Z] Total : 15451.83 60.36 0.00 0.00 0.00 0.00 0.00 00:09:39.101 00:09:40.042 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.042 Nvme0n1 : 7.00 15494.14 60.52 0.00 0.00 0.00 0.00 0.00 00:09:40.042 [2024-11-19T15:15:30.381Z] =================================================================================================================== 00:09:40.042 [2024-11-19T15:15:30.381Z] Total : 15494.14 60.52 0.00 0.00 0.00 0.00 0.00 00:09:40.042 00:09:40.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:40.983 Nvme0n1 : 8.00 15486.38 60.49 0.00 0.00 0.00 0.00 0.00 00:09:40.983 [2024-11-19T15:15:31.322Z] =================================================================================================================== 00:09:40.984 [2024-11-19T15:15:31.323Z] Total : 15486.38 60.49 0.00 0.00 0.00 0.00 0.00 00:09:40.984 00:09:41.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:41.920 Nvme0n1 : 9.00 15515.44 60.61 0.00 0.00 0.00 0.00 0.00 00:09:41.920 [2024-11-19T15:15:32.259Z] =================================================================================================================== 00:09:41.920 [2024-11-19T15:15:32.259Z] Total : 15515.44 60.61 0.00 0.00 0.00 0.00 0.00 00:09:41.920 00:09:42.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:42.865 Nvme0n1 : 10.00 15538.70 60.70 0.00 0.00 0.00 0.00 0.00 00:09:42.865 [2024-11-19T15:15:33.204Z] =================================================================================================================== 00:09:42.865 [2024-11-19T15:15:33.204Z] Total : 15538.70 60.70 0.00 0.00 0.00 0.00 0.00 00:09:42.865 00:09:42.865 00:09:42.865 Latency(us) 00:09:42.865 [2024-11-19T15:15:33.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.865 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:42.865 Nvme0n1 : 10.01 15540.91 60.71 0.00 0.00 8231.88 5194.33 16311.18 00:09:42.865 [2024-11-19T15:15:33.204Z] =================================================================================================================== 00:09:42.865 [2024-11-19T15:15:33.204Z] Total : 15540.91 60.71 0.00 0.00 8231.88 5194.33 16311.18 00:09:42.865 { 00:09:42.865 "results": [ 00:09:42.865 { 00:09:42.865 "job": "Nvme0n1", 00:09:42.865 "core_mask": "0x2", 00:09:42.865 "workload": "randwrite", 00:09:42.865 "status": "finished", 00:09:42.865 "queue_depth": 128, 00:09:42.865 "io_size": 4096, 00:09:42.865 "runtime": 10.006812, 00:09:42.865 "iops": 15540.913529703566, 00:09:42.865 "mibps": 60.706693475404556, 00:09:42.865 "io_failed": 0, 00:09:42.865 "io_timeout": 0, 00:09:42.865 "avg_latency_us": 8231.87711561943, 00:09:42.865 "min_latency_us": 5194.334814814815, 00:09:42.865 "max_latency_us": 16311.182222222222 00:09:42.865 } 00:09:42.865 ], 00:09:42.865 "core_count": 1 00:09:42.865 } 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 132343 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 132343 ']' 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 132343 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 132343 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:42.865 16:15:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 132343' 00:09:42.865 killing process with pid 132343 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 132343 00:09:42.865 Received shutdown signal, test time was about 10.000000 seconds 00:09:42.865 00:09:42.865 Latency(us) 00:09:42.865 [2024-11-19T15:15:33.204Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.865 [2024-11-19T15:15:33.204Z] =================================================================================================================== 00:09:42.865 [2024-11-19T15:15:33.204Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:42.865 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 132343 00:09:43.125 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.384 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:43.642 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:43.642 16:15:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:43.903 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:09:43.903 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:43.903 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:44.164 [2024-11-19 16:15:34.378892] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:44.164 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:44.164 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.165 16:15:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:44.165 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:44.423 request: 00:09:44.423 { 00:09:44.423 "uuid": "1f8847c0-d8e7-4213-8ae7-592d338cb580", 00:09:44.423 "method": "bdev_lvol_get_lvstores", 00:09:44.423 "req_id": 1 00:09:44.423 } 00:09:44.423 Got JSON-RPC error response 00:09:44.423 response: 00:09:44.423 { 00:09:44.423 "code": -19, 00:09:44.423 "message": "No such device" 00:09:44.423 } 00:09:44.423 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:09:44.423 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:44.423 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:44.423 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:44.423 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:44.683 aio_bdev 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 81763675-7f1c-42fd-8711-0257386bf7af 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=81763675-7f1c-42fd-8711-0257386bf7af 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:44.683 16:15:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:44.945 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 81763675-7f1c-42fd-8711-0257386bf7af -t 2000 00:09:45.203 [ 00:09:45.203 { 00:09:45.203 "name": "81763675-7f1c-42fd-8711-0257386bf7af", 00:09:45.203 "aliases": [ 00:09:45.203 "lvs/lvol" 00:09:45.203 ], 00:09:45.203 "product_name": "Logical Volume", 00:09:45.203 "block_size": 4096, 00:09:45.203 "num_blocks": 38912, 00:09:45.203 "uuid": "81763675-7f1c-42fd-8711-0257386bf7af", 00:09:45.203 "assigned_rate_limits": { 00:09:45.203 "rw_ios_per_sec": 0, 00:09:45.203 "rw_mbytes_per_sec": 0, 00:09:45.203 "r_mbytes_per_sec": 0, 00:09:45.203 "w_mbytes_per_sec": 0 00:09:45.203 }, 00:09:45.203 "claimed": false, 00:09:45.203 "zoned": false, 00:09:45.203 "supported_io_types": { 00:09:45.203 "read": true, 00:09:45.203 "write": true, 00:09:45.203 "unmap": true, 00:09:45.203 "flush": false, 00:09:45.203 "reset": true, 00:09:45.203 
"nvme_admin": false, 00:09:45.203 "nvme_io": false, 00:09:45.203 "nvme_io_md": false, 00:09:45.203 "write_zeroes": true, 00:09:45.203 "zcopy": false, 00:09:45.203 "get_zone_info": false, 00:09:45.203 "zone_management": false, 00:09:45.203 "zone_append": false, 00:09:45.203 "compare": false, 00:09:45.203 "compare_and_write": false, 00:09:45.203 "abort": false, 00:09:45.203 "seek_hole": true, 00:09:45.203 "seek_data": true, 00:09:45.203 "copy": false, 00:09:45.203 "nvme_iov_md": false 00:09:45.203 }, 00:09:45.203 "driver_specific": { 00:09:45.203 "lvol": { 00:09:45.203 "lvol_store_uuid": "1f8847c0-d8e7-4213-8ae7-592d338cb580", 00:09:45.203 "base_bdev": "aio_bdev", 00:09:45.203 "thin_provision": false, 00:09:45.203 "num_allocated_clusters": 38, 00:09:45.203 "snapshot": false, 00:09:45.203 "clone": false, 00:09:45.203 "esnap_clone": false 00:09:45.203 } 00:09:45.203 } 00:09:45.203 } 00:09:45.203 ] 00:09:45.203 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:09:45.203 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:45.203 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:45.461 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:45.461 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:45.462 16:15:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:45.720 16:15:36 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:45.720 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 81763675-7f1c-42fd-8711-0257386bf7af 00:09:45.981 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1f8847c0-d8e7-4213-8ae7-592d338cb580 00:09:46.551 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:46.551 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.551 00:09:46.551 real 0m17.716s 00:09:46.551 user 0m17.229s 00:09:46.551 sys 0m1.856s 00:09:46.551 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:46.551 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:46.551 ************************************ 00:09:46.810 END TEST lvs_grow_clean 00:09:46.810 ************************************ 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:46.810 ************************************ 
00:09:46.810 START TEST lvs_grow_dirty 00:09:46.810 ************************************ 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.810 16:15:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:47.070 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:47.070 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:47.329 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=8223c48e-971e-43e1-af47-843b8041acf5 00:09:47.329 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:09:47.329 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:47.590 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:47.590 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:47.590 16:15:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8223c48e-971e-43e1-af47-843b8041acf5 lvol 150 00:09:47.849 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:09:47.849 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:47.849 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:48.109 [2024-11-19 16:15:38.290411] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:09:48.109 [2024-11-19 16:15:38.290505] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:48.109 true 00:09:48.109 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:09:48.109 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:48.368 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:48.368 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:48.627 16:15:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:09:48.886 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:49.144 [2024-11-19 16:15:39.361603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.145 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=134529 00:09:49.403 16:15:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 134529 /var/tmp/bdevperf.sock 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 134529 ']' 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:49.403 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:49.403 [2024-11-19 16:15:39.686601] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:09:49.403 [2024-11-19 16:15:39.686674] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid134529 ] 00:09:49.662 [2024-11-19 16:15:39.752874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.662 [2024-11-19 16:15:39.797565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.662 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.662 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:09:49.662 16:15:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:50.231 Nvme0n1 00:09:50.231 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:50.493 [ 00:09:50.493 { 00:09:50.493 "name": "Nvme0n1", 00:09:50.493 "aliases": [ 00:09:50.493 "4ff02296-e48b-4c49-bd71-f1bab7fc1a71" 00:09:50.493 ], 00:09:50.493 "product_name": "NVMe disk", 00:09:50.493 "block_size": 4096, 00:09:50.493 "num_blocks": 38912, 00:09:50.493 "uuid": "4ff02296-e48b-4c49-bd71-f1bab7fc1a71", 00:09:50.493 "numa_id": 0, 00:09:50.493 "assigned_rate_limits": { 00:09:50.493 "rw_ios_per_sec": 0, 00:09:50.493 "rw_mbytes_per_sec": 0, 00:09:50.493 "r_mbytes_per_sec": 0, 00:09:50.493 "w_mbytes_per_sec": 0 00:09:50.493 }, 00:09:50.493 "claimed": false, 00:09:50.493 "zoned": false, 00:09:50.493 "supported_io_types": { 00:09:50.493 "read": true, 
00:09:50.493 "write": true, 00:09:50.493 "unmap": true, 00:09:50.493 "flush": true, 00:09:50.493 "reset": true, 00:09:50.493 "nvme_admin": true, 00:09:50.493 "nvme_io": true, 00:09:50.493 "nvme_io_md": false, 00:09:50.493 "write_zeroes": true, 00:09:50.493 "zcopy": false, 00:09:50.493 "get_zone_info": false, 00:09:50.493 "zone_management": false, 00:09:50.493 "zone_append": false, 00:09:50.493 "compare": true, 00:09:50.493 "compare_and_write": true, 00:09:50.493 "abort": true, 00:09:50.493 "seek_hole": false, 00:09:50.493 "seek_data": false, 00:09:50.493 "copy": true, 00:09:50.493 "nvme_iov_md": false 00:09:50.493 }, 00:09:50.493 "memory_domains": [ 00:09:50.493 { 00:09:50.493 "dma_device_id": "system", 00:09:50.493 "dma_device_type": 1 00:09:50.493 } 00:09:50.493 ], 00:09:50.493 "driver_specific": { 00:09:50.493 "nvme": [ 00:09:50.493 { 00:09:50.493 "trid": { 00:09:50.493 "trtype": "TCP", 00:09:50.493 "adrfam": "IPv4", 00:09:50.493 "traddr": "10.0.0.2", 00:09:50.493 "trsvcid": "4420", 00:09:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:50.493 }, 00:09:50.493 "ctrlr_data": { 00:09:50.493 "cntlid": 1, 00:09:50.493 "vendor_id": "0x8086", 00:09:50.493 "model_number": "SPDK bdev Controller", 00:09:50.493 "serial_number": "SPDK0", 00:09:50.493 "firmware_revision": "25.01", 00:09:50.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:50.493 "oacs": { 00:09:50.493 "security": 0, 00:09:50.493 "format": 0, 00:09:50.493 "firmware": 0, 00:09:50.493 "ns_manage": 0 00:09:50.493 }, 00:09:50.493 "multi_ctrlr": true, 00:09:50.493 "ana_reporting": false 00:09:50.493 }, 00:09:50.493 "vs": { 00:09:50.493 "nvme_version": "1.3" 00:09:50.493 }, 00:09:50.493 "ns_data": { 00:09:50.493 "id": 1, 00:09:50.493 "can_share": true 00:09:50.493 } 00:09:50.493 } 00:09:50.493 ], 00:09:50.493 "mp_policy": "active_passive" 00:09:50.493 } 00:09:50.493 } 00:09:50.493 ] 00:09:50.493 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=134632 
00:09:50.493 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:50.493 16:15:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.493 Running I/O for 10 seconds... 00:09:51.437 Latency(us) 00:09:51.437 [2024-11-19T15:15:41.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:51.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.437 Nvme0n1 : 1.00 15008.00 58.62 0.00 0.00 0.00 0.00 0.00 00:09:51.437 [2024-11-19T15:15:41.776Z] =================================================================================================================== 00:09:51.437 [2024-11-19T15:15:41.776Z] Total : 15008.00 58.62 0.00 0.00 0.00 0.00 0.00 00:09:51.437 00:09:52.378 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 8223c48e-971e-43e1-af47-843b8041acf5 00:09:52.637 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:52.637 Nvme0n1 : 2.00 15195.50 59.36 0.00 0.00 0.00 0.00 0.00 00:09:52.637 [2024-11-19T15:15:42.976Z] =================================================================================================================== 00:09:52.637 [2024-11-19T15:15:42.976Z] Total : 15195.50 59.36 0.00 0.00 0.00 0.00 0.00 00:09:52.637 00:09:52.637 true 00:09:52.637 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:09:52.637 16:15:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:09:52.895 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:52.895 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:52.895 16:15:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 134632 00:09:53.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.467 Nvme0n1 : 3.00 15258.67 59.60 0.00 0.00 0.00 0.00 0.00 00:09:53.467 [2024-11-19T15:15:43.806Z] =================================================================================================================== 00:09:53.467 [2024-11-19T15:15:43.806Z] Total : 15258.67 59.60 0.00 0.00 0.00 0.00 0.00 00:09:53.467 00:09:54.849 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.850 Nvme0n1 : 4.00 15336.00 59.91 0.00 0.00 0.00 0.00 0.00 00:09:54.850 [2024-11-19T15:15:45.189Z] =================================================================================================================== 00:09:54.850 [2024-11-19T15:15:45.189Z] Total : 15336.00 59.91 0.00 0.00 0.00 0.00 0.00 00:09:54.850 00:09:55.421 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.421 Nvme0n1 : 5.00 15420.80 60.24 0.00 0.00 0.00 0.00 0.00 00:09:55.421 [2024-11-19T15:15:45.760Z] =================================================================================================================== 00:09:55.421 [2024-11-19T15:15:45.760Z] Total : 15420.80 60.24 0.00 0.00 0.00 0.00 0.00 00:09:55.421 00:09:56.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.807 Nvme0n1 : 6.00 15459.83 60.39 0.00 0.00 0.00 0.00 0.00 00:09:56.807 [2024-11-19T15:15:47.146Z] =================================================================================================================== 00:09:56.807 
[2024-11-19T15:15:47.146Z] Total : 15459.83 60.39 0.00 0.00 0.00 0.00 0.00 00:09:56.807 00:09:57.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.745 Nvme0n1 : 7.00 15508.43 60.58 0.00 0.00 0.00 0.00 0.00 00:09:57.745 [2024-11-19T15:15:48.084Z] =================================================================================================================== 00:09:57.745 [2024-11-19T15:15:48.084Z] Total : 15508.43 60.58 0.00 0.00 0.00 0.00 0.00 00:09:57.745 00:09:58.684 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.684 Nvme0n1 : 8.00 15539.38 60.70 0.00 0.00 0.00 0.00 0.00 00:09:58.684 [2024-11-19T15:15:49.023Z] =================================================================================================================== 00:09:58.684 [2024-11-19T15:15:49.023Z] Total : 15539.38 60.70 0.00 0.00 0.00 0.00 0.00 00:09:58.684 00:09:59.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:59.622 Nvme0n1 : 9.00 15570.33 60.82 0.00 0.00 0.00 0.00 0.00 00:09:59.622 [2024-11-19T15:15:49.961Z] =================================================================================================================== 00:09:59.622 [2024-11-19T15:15:49.961Z] Total : 15570.33 60.82 0.00 0.00 0.00 0.00 0.00 00:09:59.622 00:10:00.563 00:10:00.563 Latency(us) 00:10:00.563 [2024-11-19T15:15:50.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.563 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:00.563 Nvme0n1 : 10.00 15571.58 60.83 0.00 0.00 8215.46 4805.97 15437.37 00:10:00.563 [2024-11-19T15:15:50.902Z] =================================================================================================================== 00:10:00.563 [2024-11-19T15:15:50.902Z] Total : 15571.58 60.83 0.00 0.00 8215.46 4805.97 15437.37 00:10:00.563 { 00:10:00.563 "results": [ 00:10:00.563 { 00:10:00.563 "job": "Nvme0n1", 
00:10:00.563 "core_mask": "0x2", 00:10:00.563 "workload": "randwrite", 00:10:00.563 "status": "finished", 00:10:00.563 "queue_depth": 128, 00:10:00.563 "io_size": 4096, 00:10:00.563 "runtime": 10.002902, 00:10:00.563 "iops": 15571.581127156898, 00:10:00.563 "mibps": 60.826488777956634, 00:10:00.563 "io_failed": 0, 00:10:00.563 "io_timeout": 0, 00:10:00.563 "avg_latency_us": 8215.46296595425, 00:10:00.563 "min_latency_us": 4805.973333333333, 00:10:00.563 "max_latency_us": 15437.368888888888 00:10:00.563 } 00:10:00.563 ], 00:10:00.563 "core_count": 1 00:10:00.563 } 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 134529 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 134529 ']' 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 134529 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 134529 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 134529' 00:10:00.563 killing process with pid 134529 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 134529 00:10:00.563 Received 
shutdown signal, test time was about 10.000000 seconds 00:10:00.563 00:10:00.563 Latency(us) 00:10:00.563 [2024-11-19T15:15:50.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.563 [2024-11-19T15:15:50.902Z] =================================================================================================================== 00:10:00.563 [2024-11-19T15:15:50.902Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.563 16:15:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 134529 00:10:00.822 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:01.080 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:01.339 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:01.339 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:01.598 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:01.598 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:01.598 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 131905 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 131905 00:10:01.599 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 131905 Killed "${NVMF_APP[@]}" "$@" 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=135883 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 135883 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 135883 ']' 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:01.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:01.599 16:15:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.599 [2024-11-19 16:15:51.909416] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:01.599 [2024-11-19 16:15:51.909512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.857 [2024-11-19 16:15:51.985127] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:01.857 [2024-11-19 16:15:52.031453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.857 [2024-11-19 16:15:52.031507] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.857 [2024-11-19 16:15:52.031520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:01.857 [2024-11-19 16:15:52.031531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:01.857 [2024-11-19 16:15:52.031541] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:01.857 [2024-11-19 16:15:52.032149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:01.857 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.116 [2024-11-19 16:15:52.417212] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:02.116 [2024-11-19 16:15:52.417339] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:02.116 [2024-11-19 16:15:52.417400] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4ff02296-e48b-4c49-bd71-f1bab7fc1a71 
00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:02.116 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:02.376 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 -t 2000 00:10:02.635 [ 00:10:02.635 { 00:10:02.635 "name": "4ff02296-e48b-4c49-bd71-f1bab7fc1a71", 00:10:02.635 "aliases": [ 00:10:02.635 "lvs/lvol" 00:10:02.635 ], 00:10:02.635 "product_name": "Logical Volume", 00:10:02.635 "block_size": 4096, 00:10:02.635 "num_blocks": 38912, 00:10:02.635 "uuid": "4ff02296-e48b-4c49-bd71-f1bab7fc1a71", 00:10:02.635 "assigned_rate_limits": { 00:10:02.635 "rw_ios_per_sec": 0, 00:10:02.635 "rw_mbytes_per_sec": 0, 00:10:02.635 "r_mbytes_per_sec": 0, 00:10:02.635 "w_mbytes_per_sec": 0 00:10:02.635 }, 00:10:02.635 "claimed": false, 00:10:02.635 "zoned": false, 00:10:02.635 "supported_io_types": { 00:10:02.635 "read": true, 00:10:02.635 "write": true, 00:10:02.635 "unmap": true, 00:10:02.635 "flush": false, 00:10:02.635 "reset": true, 00:10:02.635 "nvme_admin": false, 00:10:02.635 "nvme_io": false, 00:10:02.635 "nvme_io_md": false, 00:10:02.635 "write_zeroes": true, 00:10:02.635 "zcopy": false, 00:10:02.635 "get_zone_info": false, 00:10:02.635 "zone_management": false, 00:10:02.635 "zone_append": 
false, 00:10:02.635 "compare": false, 00:10:02.635 "compare_and_write": false, 00:10:02.635 "abort": false, 00:10:02.635 "seek_hole": true, 00:10:02.635 "seek_data": true, 00:10:02.635 "copy": false, 00:10:02.635 "nvme_iov_md": false 00:10:02.635 }, 00:10:02.635 "driver_specific": { 00:10:02.635 "lvol": { 00:10:02.635 "lvol_store_uuid": "8223c48e-971e-43e1-af47-843b8041acf5", 00:10:02.635 "base_bdev": "aio_bdev", 00:10:02.635 "thin_provision": false, 00:10:02.635 "num_allocated_clusters": 38, 00:10:02.635 "snapshot": false, 00:10:02.635 "clone": false, 00:10:02.635 "esnap_clone": false 00:10:02.635 } 00:10:02.635 } 00:10:02.635 } 00:10:02.635 ] 00:10:02.635 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:02.635 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:02.635 16:15:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:03.204 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:03.204 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:03.204 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:03.204 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:03.204 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:03.465 [2024-11-19 16:15:53.755143] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:03.465 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:03.465 16:15:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:03.466 16:15:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:03.726 request: 00:10:03.726 { 00:10:03.726 "uuid": "8223c48e-971e-43e1-af47-843b8041acf5", 00:10:03.726 "method": "bdev_lvol_get_lvstores", 00:10:03.726 "req_id": 1 00:10:03.726 } 00:10:03.726 Got JSON-RPC error response 00:10:03.726 response: 00:10:03.726 { 00:10:03.726 "code": -19, 00:10:03.726 "message": "No such device" 00:10:03.726 } 00:10:03.726 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:03.726 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:03.726 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:03.726 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:03.726 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:03.986 aio_bdev 00:10:03.986 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:10:03.986 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:10:03.986 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.986 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:03.986 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:04.247 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:04.247 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:04.508 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 -t 2000 00:10:04.767 [ 00:10:04.767 { 00:10:04.767 "name": "4ff02296-e48b-4c49-bd71-f1bab7fc1a71", 00:10:04.767 "aliases": [ 00:10:04.767 "lvs/lvol" 00:10:04.767 ], 00:10:04.767 "product_name": "Logical Volume", 00:10:04.767 "block_size": 4096, 00:10:04.767 "num_blocks": 38912, 00:10:04.767 "uuid": "4ff02296-e48b-4c49-bd71-f1bab7fc1a71", 00:10:04.767 "assigned_rate_limits": { 00:10:04.767 "rw_ios_per_sec": 0, 00:10:04.767 "rw_mbytes_per_sec": 0, 00:10:04.767 "r_mbytes_per_sec": 0, 00:10:04.767 "w_mbytes_per_sec": 0 00:10:04.767 }, 00:10:04.767 "claimed": false, 00:10:04.767 "zoned": false, 00:10:04.767 "supported_io_types": { 00:10:04.767 "read": true, 00:10:04.767 "write": true, 00:10:04.767 "unmap": true, 00:10:04.767 "flush": false, 00:10:04.767 "reset": true, 00:10:04.767 "nvme_admin": false, 00:10:04.767 "nvme_io": false, 00:10:04.767 "nvme_io_md": false, 00:10:04.768 "write_zeroes": true, 00:10:04.768 "zcopy": false, 00:10:04.768 "get_zone_info": false, 00:10:04.768 "zone_management": false, 00:10:04.768 "zone_append": false, 00:10:04.768 "compare": false, 00:10:04.768 "compare_and_write": false, 
00:10:04.768 "abort": false, 00:10:04.768 "seek_hole": true, 00:10:04.768 "seek_data": true, 00:10:04.768 "copy": false, 00:10:04.768 "nvme_iov_md": false 00:10:04.768 }, 00:10:04.768 "driver_specific": { 00:10:04.768 "lvol": { 00:10:04.768 "lvol_store_uuid": "8223c48e-971e-43e1-af47-843b8041acf5", 00:10:04.768 "base_bdev": "aio_bdev", 00:10:04.768 "thin_provision": false, 00:10:04.768 "num_allocated_clusters": 38, 00:10:04.768 "snapshot": false, 00:10:04.768 "clone": false, 00:10:04.768 "esnap_clone": false 00:10:04.768 } 00:10:04.768 } 00:10:04.768 } 00:10:04.768 ] 00:10:04.768 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:04.768 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:04.768 16:15:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:05.026 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:05.026 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:05.026 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:05.285 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:05.285 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4ff02296-e48b-4c49-bd71-f1bab7fc1a71 00:10:05.544 16:15:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8223c48e-971e-43e1-af47-843b8041acf5 00:10:05.803 16:15:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:06.063 00:10:06.063 real 0m19.327s 00:10:06.063 user 0m48.067s 00:10:06.063 sys 0m5.088s 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:06.063 ************************************ 00:10:06.063 END TEST lvs_grow_dirty 00:10:06.063 ************************************ 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:06.063 nvmf_trace.0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.063 rmmod nvme_tcp 00:10:06.063 rmmod nvme_fabrics 00:10:06.063 rmmod nvme_keyring 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 135883 ']' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 135883 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 135883 ']' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 135883 
00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:06.063 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 135883 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 135883' 00:10:06.324 killing process with pid 135883 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 135883 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 135883 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.324 16:15:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:08.869 00:10:08.869 real 0m42.566s 00:10:08.869 user 1m11.251s 00:10:08.869 sys 0m8.999s 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:08.869 ************************************ 00:10:08.869 END TEST nvmf_lvs_grow 00:10:08.869 ************************************ 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:08.869 ************************************ 00:10:08.869 START TEST nvmf_bdev_io_wait 00:10:08.869 ************************************ 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:08.869 * Looking for test storage... 
00:10:08.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.869 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.870 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.870 --rc genhtml_branch_coverage=1 00:10:08.870 --rc genhtml_function_coverage=1 00:10:08.870 --rc genhtml_legend=1 00:10:08.870 --rc geninfo_all_blocks=1 00:10:08.870 --rc geninfo_unexecuted_blocks=1 00:10:08.870 00:10:08.870 ' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.870 --rc genhtml_branch_coverage=1 00:10:08.870 --rc genhtml_function_coverage=1 00:10:08.870 --rc genhtml_legend=1 00:10:08.870 --rc geninfo_all_blocks=1 00:10:08.870 --rc geninfo_unexecuted_blocks=1 00:10:08.870 00:10:08.870 ' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.870 --rc genhtml_branch_coverage=1 00:10:08.870 --rc genhtml_function_coverage=1 00:10:08.870 --rc genhtml_legend=1 00:10:08.870 --rc geninfo_all_blocks=1 00:10:08.870 --rc geninfo_unexecuted_blocks=1 00:10:08.870 00:10:08.870 ' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.870 --rc genhtml_branch_coverage=1 00:10:08.870 --rc genhtml_function_coverage=1 00:10:08.870 --rc genhtml_legend=1 00:10:08.870 --rc geninfo_all_blocks=1 00:10:08.870 --rc geninfo_unexecuted_blocks=1 00:10:08.870 00:10:08.870 ' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.870 16:15:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:08.870 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:08.870 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:10:08.871 16:15:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:10.779 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:10.780 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:10.780 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:10.780 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:10.780 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:10.780 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:10.780 
16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:10.780 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:10.780 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:10.780 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:11.039 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:11.039 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:11.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:11.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.268 ms 00:10:11.040 00:10:11.040 --- 10.0.0.2 ping statistics --- 00:10:11.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.040 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:11.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:11.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:10:11.040 00:10:11.040 --- 10.0.0.1 ping statistics --- 00:10:11.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:11.040 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=138543 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 138543 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 138543 ']' 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:11.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:11.040 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.040 [2024-11-19 16:16:01.269387] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:11.040 [2024-11-19 16:16:01.269474] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:11.040 [2024-11-19 16:16:01.341336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:11.299 [2024-11-19 16:16:01.394367] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:11.299 [2024-11-19 16:16:01.394435] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:11.299 [2024-11-19 16:16:01.394448] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:11.299 [2024-11-19 16:16:01.394459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:11.299 [2024-11-19 16:16:01.394468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:11.299 [2024-11-19 16:16:01.396169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:11.299 [2024-11-19 16:16:01.396196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:11.299 [2024-11-19 16:16:01.396247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:11.299 [2024-11-19 16:16:01.396250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.299 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.299 [2024-11-19 16:16:01.617703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.299 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 Malloc0 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.558 
16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:11.558 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:11.559 [2024-11-19 16:16:01.670739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=138565 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=138567 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 
00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=138569 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.559 { 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme$subsystem", 00:10:11.559 "trtype": "$TEST_TRANSPORT", 00:10:11.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "$NVMF_PORT", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.559 "hdgst": ${hdgst:-false}, 00:10:11.559 "ddgst": ${ddgst:-false} 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 } 00:10:11.559 EOF 00:10:11.559 )") 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=138571 00:10:11.559 16:16:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.559 { 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme$subsystem", 00:10:11.559 "trtype": "$TEST_TRANSPORT", 00:10:11.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "$NVMF_PORT", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.559 "hdgst": ${hdgst:-false}, 00:10:11.559 "ddgst": ${ddgst:-false} 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 } 00:10:11.559 EOF 00:10:11.559 )") 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.559 { 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme$subsystem", 00:10:11.559 "trtype": "$TEST_TRANSPORT", 00:10:11.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "$NVMF_PORT", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.559 "hdgst": ${hdgst:-false}, 00:10:11.559 "ddgst": ${ddgst:-false} 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 } 00:10:11.559 EOF 00:10:11.559 )") 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:11.559 { 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme$subsystem", 00:10:11.559 "trtype": "$TEST_TRANSPORT", 00:10:11.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "$NVMF_PORT", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:11.559 "hdgst": ${hdgst:-false}, 00:10:11.559 "ddgst": ${ddgst:-false} 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 } 00:10:11.559 EOF 00:10:11.559 )") 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 138565 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@582 -- # cat 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme1", 00:10:11.559 "trtype": "tcp", 00:10:11.559 "traddr": "10.0.0.2", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "4420", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.559 "hdgst": false, 00:10:11.559 "ddgst": false 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 }' 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme1", 00:10:11.559 "trtype": "tcp", 00:10:11.559 "traddr": "10.0.0.2", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "4420", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.559 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.559 "hdgst": false, 00:10:11.559 "ddgst": false 00:10:11.559 }, 00:10:11.559 "method": "bdev_nvme_attach_controller" 00:10:11.559 }' 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:11.559 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.559 "params": { 00:10:11.559 "name": "Nvme1", 00:10:11.559 "trtype": "tcp", 
00:10:11.559 "traddr": "10.0.0.2", 00:10:11.559 "adrfam": "ipv4", 00:10:11.559 "trsvcid": "4420", 00:10:11.559 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.560 "hdgst": false, 00:10:11.560 "ddgst": false 00:10:11.560 }, 00:10:11.560 "method": "bdev_nvme_attach_controller" 00:10:11.560 }' 00:10:11.560 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:10:11.560 16:16:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:11.560 "params": { 00:10:11.560 "name": "Nvme1", 00:10:11.560 "trtype": "tcp", 00:10:11.560 "traddr": "10.0.0.2", 00:10:11.560 "adrfam": "ipv4", 00:10:11.560 "trsvcid": "4420", 00:10:11.560 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:11.560 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:11.560 "hdgst": false, 00:10:11.560 "ddgst": false 00:10:11.560 }, 00:10:11.560 "method": "bdev_nvme_attach_controller" 00:10:11.560 }' 00:10:11.560 [2024-11-19 16:16:01.721156] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:11.560 [2024-11-19 16:16:01.721156] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:11.560 [2024-11-19 16:16:01.721167] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:10:11.560 [2024-11-19 16:16:01.721249] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 16:16:01.721248] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-19 16:16:01.721248] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:10:11.560 --proc-type=auto ] 00:10:11.560 --proc-type=auto ] 00:10:11.560 [2024-11-19 16:16:01.721648] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:11.560 [2024-11-19 16:16:01.721716] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:10:11.819 [2024-11-19 16:16:01.901276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.819 [2024-11-19 16:16:01.943586] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:11.819 [2024-11-19 16:16:02.001393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.819 [2024-11-19 16:16:02.043825] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:11.819 [2024-11-19 16:16:02.101466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.819 [2024-11-19 16:16:02.142491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:12.077 [2024-11-19 16:16:02.173571] app.c: 919:spdk_app_start: *NOTICE*: Total cores 
available: 1 00:10:12.077 [2024-11-19 16:16:02.211405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:12.077 Running I/O for 1 seconds... 00:10:12.077 Running I/O for 1 seconds... 00:10:12.077 Running I/O for 1 seconds... 00:10:12.336 Running I/O for 1 seconds... 00:10:13.274 184696.00 IOPS, 721.47 MiB/s 00:10:13.274 Latency(us) 00:10:13.274 [2024-11-19T15:16:03.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.274 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:13.274 Nvme1n1 : 1.00 184324.48 720.02 0.00 0.00 690.63 309.48 1990.35 00:10:13.274 [2024-11-19T15:16:03.613Z] =================================================================================================================== 00:10:13.274 [2024-11-19T15:16:03.613Z] Total : 184324.48 720.02 0.00 0.00 690.63 309.48 1990.35 00:10:13.274 8640.00 IOPS, 33.75 MiB/s [2024-11-19T15:16:03.613Z] 9684.00 IOPS, 37.83 MiB/s 00:10:13.274 Latency(us) 00:10:13.274 [2024-11-19T15:16:03.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.274 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:13.274 Nvme1n1 : 1.01 9740.68 38.05 0.00 0.00 13082.74 6893.42 22913.33 00:10:13.274 [2024-11-19T15:16:03.613Z] =================================================================================================================== 00:10:13.274 [2024-11-19T15:16:03.613Z] Total : 9740.68 38.05 0.00 0.00 13082.74 6893.42 22913.33 00:10:13.274 00:10:13.274 Latency(us) 00:10:13.274 [2024-11-19T15:16:03.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.274 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:13.274 Nvme1n1 : 1.01 8681.54 33.91 0.00 0.00 14667.44 8786.68 25049.32 00:10:13.274 [2024-11-19T15:16:03.613Z] =================================================================================================================== 
00:10:13.274 [2024-11-19T15:16:03.614Z] Total : 8681.54 33.91 0.00 0.00 14667.44 8786.68 25049.32 00:10:13.275 9270.00 IOPS, 36.21 MiB/s 00:10:13.275 Latency(us) 00:10:13.275 [2024-11-19T15:16:03.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:13.275 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:13.275 Nvme1n1 : 1.01 9343.03 36.50 0.00 0.00 13648.94 4903.06 25049.32 00:10:13.275 [2024-11-19T15:16:03.614Z] =================================================================================================================== 00:10:13.275 [2024-11-19T15:16:03.614Z] Total : 9343.03 36.50 0.00 0.00 13648.94 4903.06 25049.32 00:10:13.275 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 138567 00:10:13.275 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 138569 00:10:13.275 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 138571 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:10:13.536 16:16:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:13.536 rmmod nvme_tcp 00:10:13.536 rmmod nvme_fabrics 00:10:13.536 rmmod nvme_keyring 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 138543 ']' 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 138543 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 138543 ']' 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 138543 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 138543 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 138543' 00:10:13.536 killing process with pid 138543 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 138543 00:10:13.536 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 138543 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.800 16:16:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.707 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:15.707 00:10:15.707 real 0m7.281s 00:10:15.707 user 0m15.117s 00:10:15.707 sys 0m3.845s 00:10:15.707 16:16:05 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:15.707 16:16:05 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:15.707 ************************************ 00:10:15.707 END TEST nvmf_bdev_io_wait 00:10:15.707 ************************************ 00:10:15.707 16:16:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:15.707 16:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:15.707 16:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:15.707 16:16:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:15.707 ************************************ 00:10:15.707 START TEST nvmf_queue_depth 00:10:15.707 ************************************ 00:10:15.707 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:15.967 * Looking for test storage... 
00:10:15.967 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:15.967 
16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:15.967 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:15.967 --rc genhtml_branch_coverage=1 00:10:15.967 --rc genhtml_function_coverage=1 00:10:15.967 --rc genhtml_legend=1 00:10:15.967 --rc geninfo_all_blocks=1 00:10:15.967 --rc geninfo_unexecuted_blocks=1 00:10:15.967 00:10:15.967 ' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.967 --rc genhtml_branch_coverage=1 00:10:15.967 --rc genhtml_function_coverage=1 00:10:15.967 --rc genhtml_legend=1 00:10:15.967 --rc geninfo_all_blocks=1 00:10:15.967 --rc geninfo_unexecuted_blocks=1 00:10:15.967 00:10:15.967 ' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.967 --rc genhtml_branch_coverage=1 00:10:15.967 --rc genhtml_function_coverage=1 00:10:15.967 --rc genhtml_legend=1 00:10:15.967 --rc geninfo_all_blocks=1 00:10:15.967 --rc geninfo_unexecuted_blocks=1 00:10:15.967 00:10:15.967 ' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:15.967 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:15.967 --rc genhtml_branch_coverage=1 00:10:15.967 --rc genhtml_function_coverage=1 00:10:15.967 --rc genhtml_legend=1 00:10:15.967 --rc geninfo_all_blocks=1 00:10:15.967 --rc geninfo_unexecuted_blocks=1 00:10:15.967 00:10:15.967 ' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:15.967 16:16:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:15.967 16:16:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:15.967 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:15.968 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:15.968 16:16:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:10:15.968 16:16:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.502 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.503 16:16:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:18.503 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:18.503 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:18.503 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:18.503 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.503 
16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:18.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:10:18.503 00:10:18.503 --- 10.0.0.2 ping statistics --- 00:10:18.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.503 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:18.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.154 ms 00:10:18.503 00:10:18.503 --- 10.0.0.1 ping statistics --- 00:10:18.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.503 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.503 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=140805 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 140805 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140805 ']' 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 [2024-11-19 16:16:08.465707] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:18.504 [2024-11-19 16:16:08.465793] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.504 [2024-11-19 16:16:08.541218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.504 [2024-11-19 16:16:08.589100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.504 [2024-11-19 16:16:08.589161] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:18.504 [2024-11-19 16:16:08.589174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.504 [2024-11-19 16:16:08.589186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.504 [2024-11-19 16:16:08.589196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:18.504 [2024-11-19 16:16:08.589791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 [2024-11-19 16:16:08.734470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 Malloc0 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 [2024-11-19 16:16:08.782449] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:18.504 16:16:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=140827 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 140827 /var/tmp/bdevperf.sock 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 140827 ']' 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:18.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.504 16:16:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:18.504 [2024-11-19 16:16:08.826745] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:10:18.504 [2024-11-19 16:16:08.826806] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140827 ] 00:10:18.762 [2024-11-19 16:16:08.891137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.762 [2024-11-19 16:16:08.938134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.762 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.762 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:18.762 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:18.762 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.762 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:19.020 NVMe0n1 00:10:19.020 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.020 16:16:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:19.281 Running I/O for 10 seconds... 
00:10:21.166 8192.00 IOPS, 32.00 MiB/s [2024-11-19T15:16:12.446Z] 8192.00 IOPS, 32.00 MiB/s [2024-11-19T15:16:13.387Z] 8366.67 IOPS, 32.68 MiB/s [2024-11-19T15:16:14.768Z] 8441.50 IOPS, 32.97 MiB/s [2024-11-19T15:16:15.712Z] 8401.40 IOPS, 32.82 MiB/s [2024-11-19T15:16:16.653Z] 8508.50 IOPS, 33.24 MiB/s [2024-11-19T15:16:17.593Z] 8491.29 IOPS, 33.17 MiB/s [2024-11-19T15:16:18.535Z] 8555.50 IOPS, 33.42 MiB/s [2024-11-19T15:16:19.476Z] 8533.78 IOPS, 33.34 MiB/s [2024-11-19T15:16:19.737Z] 8575.00 IOPS, 33.50 MiB/s 00:10:29.398 Latency(us) 00:10:29.398 [2024-11-19T15:16:19.737Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.398 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:29.398 Verification LBA range: start 0x0 length 0x4000 00:10:29.398 NVMe0n1 : 10.09 8590.51 33.56 0.00 0.00 118670.23 20680.25 69905.07 00:10:29.398 [2024-11-19T15:16:19.737Z] =================================================================================================================== 00:10:29.398 [2024-11-19T15:16:19.737Z] Total : 8590.51 33.56 0.00 0.00 118670.23 20680.25 69905.07 00:10:29.398 { 00:10:29.398 "results": [ 00:10:29.398 { 00:10:29.398 "job": "NVMe0n1", 00:10:29.398 "core_mask": "0x1", 00:10:29.398 "workload": "verify", 00:10:29.398 "status": "finished", 00:10:29.398 "verify_range": { 00:10:29.398 "start": 0, 00:10:29.398 "length": 16384 00:10:29.398 }, 00:10:29.398 "queue_depth": 1024, 00:10:29.398 "io_size": 4096, 00:10:29.398 "runtime": 10.093696, 00:10:29.398 "iops": 8590.510354185424, 00:10:29.398 "mibps": 33.55668107103681, 00:10:29.398 "io_failed": 0, 00:10:29.398 "io_timeout": 0, 00:10:29.398 "avg_latency_us": 118670.22575631842, 00:10:29.398 "min_latency_us": 20680.248888888887, 00:10:29.398 "max_latency_us": 69905.06666666667 00:10:29.398 } 00:10:29.398 ], 00:10:29.398 "core_count": 1 00:10:29.398 } 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 140827 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140827 ']' 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140827 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140827 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140827' 00:10:29.398 killing process with pid 140827 00:10:29.398 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140827 00:10:29.398 Received shutdown signal, test time was about 10.000000 seconds 00:10:29.399 00:10:29.399 Latency(us) 00:10:29.399 [2024-11-19T15:16:19.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:29.399 [2024-11-19T15:16:19.738Z] =================================================================================================================== 00:10:29.399 [2024-11-19T15:16:19.738Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:29.399 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140827 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 
00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:29.660 rmmod nvme_tcp 00:10:29.660 rmmod nvme_fabrics 00:10:29.660 rmmod nvme_keyring 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 140805 ']' 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 140805 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 140805 ']' 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 140805 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 140805 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 140805' 00:10:29.660 killing process with pid 140805 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 140805 00:10:29.660 16:16:19 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 140805 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:29.921 16:16:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.832 16:16:22 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.832 00:10:31.832 real 0m16.074s 00:10:31.832 user 0m21.777s 00:10:31.832 sys 0m3.450s 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:31.832 ************************************ 00:10:31.832 END TEST nvmf_queue_depth 00:10:31.832 ************************************ 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.832 ************************************ 00:10:31.832 START TEST nvmf_target_multipath 00:10:31.832 ************************************ 00:10:31.832 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:32.092 * Looking for test storage... 
00:10:32.092 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:32.092 16:16:22 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:32.092 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.093 --rc genhtml_branch_coverage=1 00:10:32.093 --rc genhtml_function_coverage=1 00:10:32.093 --rc genhtml_legend=1 00:10:32.093 --rc geninfo_all_blocks=1 00:10:32.093 --rc geninfo_unexecuted_blocks=1 00:10:32.093 00:10:32.093 ' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.093 --rc genhtml_branch_coverage=1 00:10:32.093 --rc genhtml_function_coverage=1 00:10:32.093 --rc genhtml_legend=1 00:10:32.093 --rc geninfo_all_blocks=1 00:10:32.093 --rc geninfo_unexecuted_blocks=1 00:10:32.093 00:10:32.093 ' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.093 --rc genhtml_branch_coverage=1 00:10:32.093 --rc genhtml_function_coverage=1 00:10:32.093 --rc genhtml_legend=1 00:10:32.093 --rc geninfo_all_blocks=1 00:10:32.093 --rc geninfo_unexecuted_blocks=1 00:10:32.093 00:10:32.093 ' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:32.093 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:32.093 --rc genhtml_branch_coverage=1 00:10:32.093 --rc genhtml_function_coverage=1 00:10:32.093 --rc genhtml_legend=1 00:10:32.093 --rc geninfo_all_blocks=1 00:10:32.093 --rc geninfo_unexecuted_blocks=1 00:10:32.093 00:10:32.093 ' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:32.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:10:32.093 16:16:22 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:34.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:34.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:34.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:34.682 16:16:24 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:34.682 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:34.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:34.683 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:34.683 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:10:34.683 00:10:34.683 --- 10.0.0.2 ping statistics --- 00:10:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:34.683 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:34.683 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:34.683 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms
00:10:34.683
00:10:34.683 --- 10.0.0.1 ping statistics ---
00:10:34.683 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:34.683 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test'
00:10:34.683 only one NIC for nvmf test
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:34.683 rmmod nvme_tcp
00:10:34.683 rmmod nvme_fabrics
00:10:34.683 rmmod nvme_keyring
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:34.683 16:16:24 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']'
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:10:36.646
00:10:36.646 real 0m4.606s
00:10:36.646 user 0m0.960s
00:10:36.646 sys 0m1.662s
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x
00:10:36.646 ************************************
00:10:36.646 END TEST nvmf_target_multipath
00:10:36.646 ************************************
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:10:36.646 16:16:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:36.646 ************************************
00:10:36.646 START TEST nvmf_zcopy
00:10:36.646 ************************************
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp
00:10:36.647 * Looking for test storage...
00:10:36.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-:
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-:
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<'
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:10:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:36.647 --rc genhtml_branch_coverage=1
00:10:36.647 --rc genhtml_function_coverage=1
00:10:36.647 --rc genhtml_legend=1
00:10:36.647 --rc geninfo_all_blocks=1
00:10:36.647 --rc geninfo_unexecuted_blocks=1
00:10:36.647
00:10:36.647 '
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:10:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:36.647 --rc genhtml_branch_coverage=1
00:10:36.647 --rc genhtml_function_coverage=1
00:10:36.647 --rc genhtml_legend=1
00:10:36.647 --rc geninfo_all_blocks=1
00:10:36.647 --rc geninfo_unexecuted_blocks=1
00:10:36.647
00:10:36.647 '
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:10:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:36.647 --rc genhtml_branch_coverage=1
00:10:36.647 --rc genhtml_function_coverage=1
00:10:36.647 --rc genhtml_legend=1
00:10:36.647 --rc geninfo_all_blocks=1
00:10:36.647 --rc geninfo_unexecuted_blocks=1
00:10:36.647
00:10:36.647 '
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:10:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:36.647 --rc genhtml_branch_coverage=1
00:10:36.647 --rc genhtml_function_coverage=1
00:10:36.647 --rc genhtml_legend=1
00:10:36.647 --rc geninfo_all_blocks=1
00:10:36.647 --rc geninfo_unexecuted_blocks=1
00:10:36.647
00:10:36.647 '
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:36.647 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable
00:10:36.923 16:16:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=()
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:38.878 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:38.878 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:10:38.878 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:38.879 Found net devices under 0000:0a:00.0: cvl_0_0
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:38.879 Found net devices under 0000:0a:00.1: cvl_0_1
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:10:38.879 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:10:39.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:39.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms
00:10:39.155
00:10:39.155 --- 10.0.0.2 ping statistics ---
00:10:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:39.155 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:39.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:39.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.102 ms
00:10:39.155
00:10:39.155 --- 10.0.0.1 ping statistics ---
00:10:39.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:39.155 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:39.155 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=146051
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 146051
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 146051 ']'
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:39.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:39.156 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:39.156 [2024-11-19 16:16:29.412180] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:10:39.156 [2024-11-19 16:16:29.412273] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:39.460 [2024-11-19 16:16:29.488940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:39.461 [2024-11-19 16:16:29.537326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:39.461 [2024-11-19 16:16:29.537396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:39.461 [2024-11-19 16:16:29.537410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:39.461 [2024-11-19 16:16:29.537422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:39.461 [2024-11-19 16:16:29.537432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:39.461 [2024-11-19 16:16:29.538026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:39.461 [2024-11-19 16:16:29.675318] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 [2024-11-19 16:16:29.691543] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 malloc0 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:39.461 { 00:10:39.461 "params": { 00:10:39.461 "name": "Nvme$subsystem", 00:10:39.461 "trtype": "$TEST_TRANSPORT", 00:10:39.461 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:39.461 "adrfam": "ipv4", 00:10:39.461 "trsvcid": "$NVMF_PORT", 00:10:39.461 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:39.461 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:39.461 "hdgst": ${hdgst:-false}, 00:10:39.461 "ddgst": ${ddgst:-false} 00:10:39.461 }, 00:10:39.461 "method": "bdev_nvme_attach_controller" 00:10:39.461 } 00:10:39.461 EOF 00:10:39.461 )") 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:39.461 16:16:29 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:39.461 "params": { 00:10:39.461 "name": "Nvme1", 00:10:39.461 "trtype": "tcp", 00:10:39.461 "traddr": "10.0.0.2", 00:10:39.461 "adrfam": "ipv4", 00:10:39.461 "trsvcid": "4420", 00:10:39.461 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:39.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:39.461 "hdgst": false, 00:10:39.461 "ddgst": false 00:10:39.461 }, 00:10:39.461 "method": "bdev_nvme_attach_controller" 00:10:39.461 }' 00:10:39.461 [2024-11-19 16:16:29.771395] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:10:39.461 [2024-11-19 16:16:29.771484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid146200 ] 00:10:39.745 [2024-11-19 16:16:29.840921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:39.745 [2024-11-19 16:16:29.889676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.012 Running I/O for 10 seconds... 
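The trace above shows the zcopy target being brought up step by step through SPDK's JSON-RPC interface (transport creation with zero-copy enabled, subsystem and listener setup, and a malloc bdev attached as namespace 1). A minimal sketch of that same sequence as explicit `rpc.py` calls follows — the `scripts/rpc.py` path is an assumption (the log uses the test harness's `rpc_cmd` wrapper instead), but every flag and identifier is taken verbatim from the trace; this is a command recipe against a running `nvmf_tgt`, not a standalone script.

```shell
# Assumed rpc.py location; the autotest log invokes these via its rpc_cmd wrapper.
RPC=scripts/rpc.py

# Create the TCP transport with zero-copy enabled and no data buffering (-c 0).
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy

# Create the subsystem (allow any host, serial number, max 10 namespaces).
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10

# Listen for NVMe/TCP connections on 10.0.0.2:4420, plus the discovery service.
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Back the subsystem with a 32 MiB malloc bdev (4096-byte blocks) as NSID 1.
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
```

The later `Requested NSID 1 already in use` errors in the log are expected: the test repeatedly re-issues the `nvmf_subsystem_add_ns` step against the already-populated subsystem while I/O is running.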
00:10:42.025 5606.00 IOPS, 43.80 MiB/s [2024-11-19T15:16:33.359Z] 5640.50 IOPS, 44.07 MiB/s [2024-11-19T15:16:34.375Z] 5669.33 IOPS, 44.29 MiB/s [2024-11-19T15:16:35.378Z] 5662.25 IOPS, 44.24 MiB/s [2024-11-19T15:16:36.375Z] 5679.40 IOPS, 44.37 MiB/s [2024-11-19T15:16:37.314Z] 5683.00 IOPS, 44.40 MiB/s [2024-11-19T15:16:38.695Z] 5690.71 IOPS, 44.46 MiB/s [2024-11-19T15:16:39.264Z] 5689.62 IOPS, 44.45 MiB/s [2024-11-19T15:16:40.650Z] 5694.56 IOPS, 44.49 MiB/s [2024-11-19T15:16:40.650Z] 5692.00 IOPS, 44.47 MiB/s 00:10:50.311 Latency(us) 00:10:50.311 [2024-11-19T15:16:40.650Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:50.311 Verification LBA range: start 0x0 length 0x1000 00:10:50.311 Nvme1n1 : 10.02 5695.57 44.50 0.00 0.00 22413.88 3543.80 30874.74 00:10:50.311 [2024-11-19T15:16:40.650Z] =================================================================================================================== 00:10:50.311 [2024-11-19T15:16:40.650Z] Total : 5695.57 44.50 0.00 0.00 22413.88 3543.80 30874.74 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=147431 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:10:50.311 16:16:40 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:50.311 { 00:10:50.311 "params": { 00:10:50.311 "name": "Nvme$subsystem", 00:10:50.311 "trtype": "$TEST_TRANSPORT", 00:10:50.311 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:50.311 "adrfam": "ipv4", 00:10:50.311 "trsvcid": "$NVMF_PORT", 00:10:50.311 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:50.311 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:50.311 "hdgst": ${hdgst:-false}, 00:10:50.311 "ddgst": ${ddgst:-false} 00:10:50.311 }, 00:10:50.311 "method": "bdev_nvme_attach_controller" 00:10:50.311 } 00:10:50.311 EOF 00:10:50.311 )") 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:10:50.311 [2024-11-19 16:16:40.480691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.311 [2024-11-19 16:16:40.480743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:10:50.311 16:16:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:50.311 "params": { 00:10:50.311 "name": "Nvme1", 00:10:50.311 "trtype": "tcp", 00:10:50.311 "traddr": "10.0.0.2", 00:10:50.311 "adrfam": "ipv4", 00:10:50.311 "trsvcid": "4420", 00:10:50.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:50.311 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:50.311 "hdgst": false, 00:10:50.311 "ddgst": false 00:10:50.311 }, 00:10:50.311 "method": "bdev_nvme_attach_controller" 00:10:50.311 }' 00:10:50.311 [2024-11-19 16:16:40.488646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.311 [2024-11-19 16:16:40.488669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.311 [2024-11-19 16:16:40.496668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.496688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.504687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.504708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.512708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.512727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.519303] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:10:50.312 [2024-11-19 16:16:40.519396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147431 ] 00:10:50.312 [2024-11-19 16:16:40.520730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.520751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.528754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.528775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.536772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.536792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.544794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.544813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.552816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.552836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.560838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.560857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.568860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.568881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:10:50.312 [2024-11-19 16:16:40.576881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.576901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.584903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.584923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.586234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:50.312 [2024-11-19 16:16:40.592951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.592990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.600980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.601020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.608972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.608994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.616991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.617012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.625011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.625032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.633034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.633075] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.312 [2024-11-19 16:16:40.633657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.312 [2024-11-19 16:16:40.641082] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.312 [2024-11-19 16:16:40.641104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.649125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.649155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.657177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.657225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.665172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.665211] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.673189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.673228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.681227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.681266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.689234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.689276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.697261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:10:50.571 [2024-11-19 16:16:40.697301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.705249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.705274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.713295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.713332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.721319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.721372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.729324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.729369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.737328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.737363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.745351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.745386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.753398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.753436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.761415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 
16:16:40.761439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.769434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.769457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.777460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.777483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.785463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.785484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.793485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.793505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.801509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.801530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.809530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.809550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.817589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.817610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.825579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.825601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.833603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.833626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.841623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.841645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.849644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.849665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.893309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.893338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 [2024-11-19 16:16:40.897792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.897814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.571 Running I/O for 5 seconds... 
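The bdevperf runs in this log receive their target description as a JSON fragment generated by `gen_nvmf_target_json` and passed over `/dev/fd/62`/`/dev/fd/63`. A sketch that reconstructs that fragment (copied verbatim from the `printf '%s\n'` output in the trace; the `/tmp` filename is an assumption) and validates it before use:

```shell
# Reconstruct the bdevperf attach-controller config exactly as the log's
# gen_nvmf_target_json printed it; the file path is illustrative only.
cat <<'EOF' > /tmp/zcopy_bdevperf.json
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF

# Sanity-check that the fragment is well-formed JSON before handing it to
# bdevperf (the harness pipes it through jq; python3 works as a stand-in).
python3 -m json.tool /tmp/zcopy_bdevperf.json
```

In the harness this fragment is fed to `bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192`, i.e. a 5-second 50/50 random read/write run at queue depth 128 with 8 KiB I/Os.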
00:10:50.571 [2024-11-19 16:16:40.905821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.571 [2024-11-19 16:16:40.905860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.917639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.917667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.928281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.928317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.941079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.941109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.952575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.952603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.964739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.964766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.976181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.976210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.987615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.987642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:40.999400] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:40.999442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.011219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.011250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.025095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.025138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.036502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.036530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.048062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.048099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.060421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.060448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.072632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.072659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.084455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.084482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [2024-11-19 16:16:41.096194] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:50.830 [2024-11-19 16:16:41.096237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:50.830 [... identical error pair (subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use" / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: "Unable to add namespace") repeats approximately every 12 ms from 16:16:41.107757 through 16:16:43.014540 ...] 10654.00 IOPS, 83.23 MiB/s [2024-11-19T15:16:41.954Z] 10752.00 IOPS, 84.00 MiB/s [2024-11-19T15:16:43.001Z] [2024-11-19 16:16:43.026649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.026676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:52.922 [2024-11-19 16:16:43.038805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.038832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.051155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.051183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.062766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.062793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.074039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.074095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.086267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.086310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.097547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.097573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.109066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.109100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.120544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.120571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.131662] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.131688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.143225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.143254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.155034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.155088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.168619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.168647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.178998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.179025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.191499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.191526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.203338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.203380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.215630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.215657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.227649] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.227677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.239453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.239481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:52.922 [2024-11-19 16:16:43.253301] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:52.922 [2024-11-19 16:16:43.253330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.264849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.264877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.276616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.276643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.288531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.288559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.300138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.300167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.311685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.311712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.323298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 
[2024-11-19 16:16:43.323327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.334960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.334988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.346659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.346686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.358843] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.358871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.370323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.370353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.381696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.381723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.393840] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.393868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.405887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.405915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.418089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.418118] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.430093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.430138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.442311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.442339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.454140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.454169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.466471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.466498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.478802] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.478829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.490440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.490468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.502262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.502306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.181 [2024-11-19 16:16:43.513873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.181 [2024-11-19 16:16:43.513902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:53.439 [2024-11-19 16:16:43.526165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.526210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.537901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.537928] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.549737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.549764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.561715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.561743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.573237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.573267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.585215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.585243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.597791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.597819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.610119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.610148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.621779] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.621805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.633596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.633622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.645178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.645207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.656983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.439 [2024-11-19 16:16:43.657010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.439 [2024-11-19 16:16:43.668832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.668859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.680496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.680522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.692320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.692363] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.703777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.703804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.715656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.715684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.727630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.727656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.739631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.739658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.753128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.753156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.763476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.763502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.440 [2024-11-19 16:16:43.775732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.440 [2024-11-19 16:16:43.775759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.787261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.787293] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.799119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.799162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.811259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 
[2024-11-19 16:16:43.811287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.823261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.823290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.835135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.835171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.848894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.848921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.860445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.860473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.871739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.871766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.882941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.882967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.894323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.894352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.906758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.906785] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 10761.67 IOPS, 84.08 MiB/s [2024-11-19T15:16:44.038Z] [2024-11-19 16:16:43.919183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.919212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.930390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.930418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.942237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.942266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.954023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.954077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.965729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.965756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.977694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.977722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:43.989426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:43.989454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:44.001477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:44.001504] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:44.013712] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:44.013739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.699 [2024-11-19 16:16:44.025895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.699 [2024-11-19 16:16:44.025923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.037911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.037940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.049611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.049638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.061158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.061210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.073067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.073106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.085308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.085353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.957 [2024-11-19 16:16:44.096909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.957 [2024-11-19 16:16:44.096937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:53.957 [2024-11-19 16:16:44.110206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.110250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.121382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.121410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.133128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.133157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.144951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.144978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.158495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.158522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.169860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.169887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.181703] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.181730] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.193342] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.193386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.204758] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.204785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.216228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.216257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.227988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.228015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.240167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.240196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.251493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.251521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.263733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.263761] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.275456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.275483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:53.958 [2024-11-19 16:16:44.286903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:53.958 [2024-11-19 16:16:44.286939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.216 [2024-11-19 16:16:44.298688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.298717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.310188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.310217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.321811] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.321839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.333216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.333257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.344978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.345005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.357230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.357259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.368663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.368690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.380439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.380467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.392145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 
[2024-11-19 16:16:44.392174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.403671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.403697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.415219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.415248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.426676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.426704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.438861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.438888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.451173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.451201] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.462824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.462851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.474833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.474860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:54.217 [2024-11-19 16:16:44.486801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:54.217 [2024-11-19 16:16:44.486828] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:54.217 [2024-11-19 16:16:44.498917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:54.217 [2024-11-19 16:16:44.498945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeated roughly every 12 ms, 16:16:44.510518 through 16:16:44.915337 ...]
10769.25 IOPS, 84.13 MiB/s [2024-11-19T15:16:45.077Z]
[... the same two-line error pair repeated roughly every 12 ms, 16:16:44.926927 through 16:16:45.912881 ...]
10764.20 IOPS, 84.10 MiB/s [2024-11-19T15:16:46.118Z]
00:10:55.779 [2024-11-19 16:16:45.924550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.779 [2024-11-19 16:16:45.924577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.779 [2024-11-19 16:16:45.932004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.779 [2024-11-19 16:16:45.932029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:55.779 Latency(us)
[2024-11-19T15:16:46.118Z] Device Information : runtime(s)  IOPS      MiB/s  Fail/s  TO/s  Average   min      max
00:10:55.779 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:55.779 Nvme1n1            :       5.01  10768.49  84.13    0.00  0.00  11870.50  4878.79  20291.89
[2024-11-19T15:16:46.118Z] ===================================================================================================================
[2024-11-19T15:16:46.118Z] Total              :             10768.49  84.13    0.00  0.00  11870.50  4878.79  20291.89
00:10:55.779 [2024-11-19 16:16:45.940024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:55.779 [2024-11-19 16:16:45.940062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeated roughly every 8 ms, 16:16:45.948060 through 16:16:46.124569 ...]
00:10:56.041 [2024-11-19 16:16:46.132567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:56.041 [2024-11-19 16:16:46.132587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (147431) - No such process
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 147431
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
delay0
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.041 16:16:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:56.041 [2024-11-19 16:16:46.256955] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:11:02.622 Initializing NVMe Controllers 00:11:02.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:02.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:02.622 Initialization complete. Launching workers. 00:11:02.622 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 91 00:11:02.622 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 378, failed to submit 33 00:11:02.622 success 205, unsuccessful 173, failed 0 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:02.622 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:02.623 rmmod nvme_tcp 00:11:02.623 rmmod nvme_fabrics 
00:11:02.623 rmmod nvme_keyring 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 146051 ']' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 146051 ']' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146051' 00:11:02.623 killing process with pid 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 146051 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:02.623 16:16:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:02.623 16:16:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.532 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:04.533 00:11:04.533 real 0m27.983s 00:11:04.533 user 0m41.344s 00:11:04.533 sys 0m7.829s 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:04.533 ************************************ 00:11:04.533 END TEST nvmf_zcopy 00:11:04.533 ************************************ 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:04.533 ************************************ 00:11:04.533 START TEST nvmf_nmic 00:11:04.533 ************************************ 00:11:04.533 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:04.792 * Looking for test storage... 00:11:04.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.792 16:16:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.792 --rc genhtml_branch_coverage=1 00:11:04.792 --rc genhtml_function_coverage=1 00:11:04.792 --rc genhtml_legend=1 00:11:04.792 --rc geninfo_all_blocks=1 00:11:04.792 --rc geninfo_unexecuted_blocks=1 00:11:04.792 00:11:04.792 ' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.792 --rc genhtml_branch_coverage=1 00:11:04.792 --rc genhtml_function_coverage=1 00:11:04.792 --rc genhtml_legend=1 00:11:04.792 --rc geninfo_all_blocks=1 00:11:04.792 --rc geninfo_unexecuted_blocks=1 00:11:04.792 00:11:04.792 ' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.792 --rc genhtml_branch_coverage=1 00:11:04.792 --rc genhtml_function_coverage=1 00:11:04.792 --rc genhtml_legend=1 00:11:04.792 --rc geninfo_all_blocks=1 00:11:04.792 --rc geninfo_unexecuted_blocks=1 00:11:04.792 00:11:04.792 ' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:04.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.792 --rc genhtml_branch_coverage=1 00:11:04.792 --rc genhtml_function_coverage=1 00:11:04.792 --rc genhtml_legend=1 00:11:04.792 --rc geninfo_all_blocks=1 00:11:04.792 --rc geninfo_unexecuted_blocks=1 00:11:04.792 00:11:04.792 ' 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:04.792 16:16:54 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:04.792 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:04.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:04.793 16:16:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:04.793 
16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:11:04.793 16:16:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:07.333 16:16:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:07.333 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:07.333 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:07.333 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:07.333 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:07.334 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:07.334 
16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:07.334 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:07.334 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:07.334 00:11:07.334 --- 10.0.0.2 ping statistics --- 00:11:07.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.334 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:07.334 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:07.334 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:11:07.334 00:11:07.334 --- 10.0.0.1 ping statistics --- 00:11:07.334 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:07.334 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=150830 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:07.334 
16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 150830 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 150830 ']' 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:07.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 [2024-11-19 16:16:57.345366] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:11:07.334 [2024-11-19 16:16:57.345461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:07.334 [2024-11-19 16:16:57.418736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:07.334 [2024-11-19 16:16:57.465172] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:07.334 [2024-11-19 16:16:57.465224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:07.334 [2024-11-19 16:16:57.465244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:07.334 [2024-11-19 16:16:57.465255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:11:07.334 [2024-11-19 16:16:57.465269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:07.334 [2024-11-19 16:16:57.466692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.334 [2024-11-19 16:16:57.466802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.334 [2024-11-19 16:16:57.466900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:07.334 [2024-11-19 16:16:57.466908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.334 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.335 [2024-11-19 16:16:57.613149] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:07.335 16:16:57 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.335 Malloc0 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.335 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.595 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:07.595 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.595 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 [2024-11-19 16:16:57.674116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:07.596 test case1: single bdev can't be used in multiple subsystems 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 [2024-11-19 16:16:57.697898] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:07.596 [2024-11-19 16:16:57.697927] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:07.596 [2024-11-19 16:16:57.697941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:11:07.596 request: 00:11:07.596 { 00:11:07.596 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:07.596 "namespace": { 00:11:07.596 "bdev_name": "Malloc0", 00:11:07.596 "no_auto_visible": false 00:11:07.596 }, 00:11:07.596 "method": "nvmf_subsystem_add_ns", 00:11:07.596 "req_id": 1 00:11:07.596 } 00:11:07.596 Got JSON-RPC error response 00:11:07.596 response: 00:11:07.596 { 00:11:07.596 "code": -32602, 00:11:07.596 "message": "Invalid parameters" 00:11:07.596 } 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:07.596 Adding namespace failed - expected result. 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:07.596 test case2: host connect to nvmf target in multiple paths 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:07.596 [2024-11-19 16:16:57.706018] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:07.596 16:16:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:08.164 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:08.732 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:08.732 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:08.732 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:08.732 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:08.732 16:16:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:10.635 16:17:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:10.635 [global] 00:11:10.635 thread=1 
00:11:10.635 invalidate=1 00:11:10.635 rw=write 00:11:10.635 time_based=1 00:11:10.635 runtime=1 00:11:10.635 ioengine=libaio 00:11:10.635 direct=1 00:11:10.635 bs=4096 00:11:10.635 iodepth=1 00:11:10.635 norandommap=0 00:11:10.635 numjobs=1 00:11:10.635 00:11:10.635 verify_dump=1 00:11:10.635 verify_backlog=512 00:11:10.635 verify_state_save=0 00:11:10.635 do_verify=1 00:11:10.635 verify=crc32c-intel 00:11:10.635 [job0] 00:11:10.635 filename=/dev/nvme0n1 00:11:10.635 Could not set queue depth (nvme0n1) 00:11:11.203 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:11.203 fio-3.35 00:11:11.203 Starting 1 thread 00:11:12.585 00:11:12.585 job0: (groupid=0, jobs=1): err= 0: pid=151354: Tue Nov 19 16:17:02 2024 00:11:12.585 read: IOPS=335, BW=1342KiB/s (1375kB/s)(1376KiB/1025msec) 00:11:12.585 slat (nsec): min=5603, max=34200, avg=9424.78, stdev=5714.82 00:11:12.585 clat (usec): min=187, max=42013, avg=2656.56, stdev=9708.66 00:11:12.585 lat (usec): min=203, max=42029, avg=2665.98, stdev=9712.59 00:11:12.585 clat percentiles (usec): 00:11:12.585 | 1.00th=[ 192], 5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 229], 00:11:12.585 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 253], 00:11:12.585 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 293], 95.00th=[41157], 00:11:12.585 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:12.585 | 99.99th=[42206] 00:11:12.585 write: IOPS=499, BW=1998KiB/s (2046kB/s)(2048KiB/1025msec); 0 zone resets 00:11:12.585 slat (usec): min=6, max=28201, avg=64.75, stdev=1245.93 00:11:12.585 clat (usec): min=122, max=271, avg=140.85, stdev=12.09 00:11:12.585 lat (usec): min=130, max=28391, avg=205.60, stdev=1248.17 00:11:12.585 clat percentiles (usec): 00:11:12.585 | 1.00th=[ 126], 5.00th=[ 127], 10.00th=[ 128], 20.00th=[ 131], 00:11:12.585 | 30.00th=[ 135], 40.00th=[ 137], 50.00th=[ 141], 60.00th=[ 145], 00:11:12.585 | 70.00th=[ 147], 80.00th=[ 149], 
90.00th=[ 153], 95.00th=[ 159], 00:11:12.585 | 99.00th=[ 172], 99.50th=[ 182], 99.90th=[ 273], 99.95th=[ 273], 00:11:12.585 | 99.99th=[ 273] 00:11:12.585 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:11:12.585 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:12.585 lat (usec) : 250=81.54%, 500=16.12% 00:11:12.585 lat (msec) : 50=2.34% 00:11:12.585 cpu : usr=0.39%, sys=0.78%, ctx=858, majf=0, minf=1 00:11:12.585 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:12.585 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:12.586 issued rwts: total=344,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:12.586 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:12.586 00:11:12.586 Run status group 0 (all jobs): 00:11:12.586 READ: bw=1342KiB/s (1375kB/s), 1342KiB/s-1342KiB/s (1375kB/s-1375kB/s), io=1376KiB (1409kB), run=1025-1025msec 00:11:12.586 WRITE: bw=1998KiB/s (2046kB/s), 1998KiB/s-1998KiB/s (2046kB/s-2046kB/s), io=2048KiB (2097kB), run=1025-1025msec 00:11:12.586 00:11:12.586 Disk stats (read/write): 00:11:12.586 nvme0n1: ios=365/512, merge=0/0, ticks=1704/66, in_queue=1770, util=98.70% 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:12.586 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:12.586 rmmod nvme_tcp 00:11:12.586 rmmod nvme_fabrics 00:11:12.586 rmmod nvme_keyring 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 150830 ']' 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 150830 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 150830 ']' 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 
150830 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 150830 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 150830' 00:11:12.586 killing process with pid 150830 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 150830 00:11:12.586 16:17:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 150830 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:12.847 16:17:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:15.392 00:11:15.392 real 0m10.290s 00:11:15.392 user 0m23.389s 00:11:15.392 sys 0m2.750s 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:15.392 ************************************ 00:11:15.392 END TEST nvmf_nmic 00:11:15.392 ************************************ 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.392 16:17:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:15.392 ************************************ 00:11:15.393 START TEST nvmf_fio_target 00:11:15.393 ************************************ 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:15.393 * Looking for test storage... 
00:11:15.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:15.393 16:17:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.393 
--rc genhtml_branch_coverage=1 00:11:15.393 --rc genhtml_function_coverage=1 00:11:15.393 --rc genhtml_legend=1 00:11:15.393 --rc geninfo_all_blocks=1 00:11:15.393 --rc geninfo_unexecuted_blocks=1 00:11:15.393 00:11:15.393 ' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.393 --rc genhtml_branch_coverage=1 00:11:15.393 --rc genhtml_function_coverage=1 00:11:15.393 --rc genhtml_legend=1 00:11:15.393 --rc geninfo_all_blocks=1 00:11:15.393 --rc geninfo_unexecuted_blocks=1 00:11:15.393 00:11:15.393 ' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.393 --rc genhtml_branch_coverage=1 00:11:15.393 --rc genhtml_function_coverage=1 00:11:15.393 --rc genhtml_legend=1 00:11:15.393 --rc geninfo_all_blocks=1 00:11:15.393 --rc geninfo_unexecuted_blocks=1 00:11:15.393 00:11:15.393 ' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:15.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:15.393 --rc genhtml_branch_coverage=1 00:11:15.393 --rc genhtml_function_coverage=1 00:11:15.393 --rc genhtml_legend=1 00:11:15.393 --rc geninfo_all_blocks=1 00:11:15.393 --rc geninfo_unexecuted_blocks=1 00:11:15.393 00:11:15.393 ' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:15.393 
16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 
00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:15.393 16:17:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:15.393 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:15.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:15.394 16:17:05 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:11:15.394 16:17:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:11:17.302 16:17:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- 
# [[ tcp == rdma ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:17.302 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:17.302 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:17.302 
16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:17.302 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:17.302 16:17:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:17.302 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:17.303 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 
)) 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:17.303 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:17.562 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:17.562 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:11:17.562 00:11:17.562 --- 10.0.0.2 ping statistics --- 00:11:17.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.562 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:17.562 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:17.562 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:11:17.562 00:11:17.562 --- 10.0.0.1 ping statistics --- 00:11:17.562 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:17.562 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:11:17.562 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=153561 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 153561 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 153561 ']' 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:17.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
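The `waitforlisten 153561` step recorded here polls until the freshly started nvmf_tgt exposes its RPC socket at /var/tmp/spdk.sock, retrying up to 100 times. A minimal sketch of that polling pattern; `wait_for_sock` is a hypothetical name, and the real helper in common/autotest_common.sh additionally verifies the socket type and that the target PID is still alive:

```shell
#!/usr/bin/env bash
# Poll until a path appears, or give up after max_retries attempts.
# Sketch of the waitforlisten pattern seen in the log; the real helper
# also checks that the path is a listening UNIX socket and that the
# nvmf_tgt process (pid 153561 in this run) has not died meanwhile.
wait_for_sock() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -e $path ]] && return 0   # real check is -S (socket), not -e
        sleep 0.1
    done
    return 1
}
```

In this run the wait succeeds almost immediately, which is why the very next trace lines show rpc.py calls going through.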
00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:17.563 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.563 [2024-11-19 16:17:07.728557] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:11:17.563 [2024-11-19 16:17:07.728630] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:17.563 [2024-11-19 16:17:07.798617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:17.563 [2024-11-19 16:17:07.842149] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:17.563 [2024-11-19 16:17:07.842206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:17.563 [2024-11-19 16:17:07.842234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:17.563 [2024-11-19 16:17:07.842245] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:17.563 [2024-11-19 16:17:07.842253] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
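Stepping back, the nvmf_tcp_init sequence traced earlier (nvmf/common.sh@250-291: address flush, `ip netns` plumbing, the `ipts` iptables wrapper, and the two ping checks) amounts to a fixed recipe: move the target NIC into its own namespace, address both ends, open TCP port 4420, and verify reachability. A sketch that echoes the commands instead of running them, since they need root and the physical cvl_0_* interfaces; the function name `setup_tcp_testbed` is hypothetical:

```shell
#!/usr/bin/env bash
# Replay of the nvmf_tcp_init steps recorded in the log, with commands
# printed rather than executed. Interface and address values are taken
# verbatim from this run (cvl_0_0 = target side, cvl_0_1 = initiator).
setup_tcp_testbed() {
    local ns=cvl_0_0_ns_spdk tgt_if=cvl_0_0 ini_if=cvl_0_1
    local run=echo   # set run=sudo (or empty) to actually execute
    $run ip -4 addr flush "$tgt_if"
    $run ip -4 addr flush "$ini_if"
    $run ip netns add "$ns"
    $run ip link set "$tgt_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$ini_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $run ip link set "$ini_if" up
    $run ip netns exec "$ns" ip link set "$tgt_if" up
    $run ip netns exec "$ns" ip link set lo up
    # the log's ipts wrapper also tags the rule with an SPDK_NVMF comment
    $run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
    $run ping -c 1 10.0.0.2
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}
setup_tcp_testbed
```

Once both pings succeed, nvmf_tgt is started inside the namespace via `ip netns exec cvl_0_0_ns_spdk`, which is exactly the NVMF_TARGET_NS_CMD prefix visible in the nvmfappstart trace above.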
00:11:17.563 [2024-11-19 16:17:07.843791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:17.563 [2024-11-19 16:17:07.843900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:17.563 [2024-11-19 16:17:07.844030] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:17.563 [2024-11-19 16:17:07.844037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:17.822 16:17:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:18.081 [2024-11-19 16:17:08.227793] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.081 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.339 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:18.339 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:18.598 16:17:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:18.598 16:17:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.167 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:19.167 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.167 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:19.167 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:19.736 16:17:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:19.994 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:19.994 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.252 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:20.252 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:20.511 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:20.511 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:20.769 16:17:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:21.027 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:21.027 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.286 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:21.286 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:21.544 16:17:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.803 [2024-11-19 16:17:12.017686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.803 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:22.062 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:22.322 16:17:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:11:22.889 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:11:22.889 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:11:22.889 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.889 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:11:22.890 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:11:22.890 16:17:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:11:25.429 16:17:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:25.429 [global] 00:11:25.429 thread=1 00:11:25.429 invalidate=1 00:11:25.429 rw=write 00:11:25.429 time_based=1 00:11:25.429 runtime=1 00:11:25.429 ioengine=libaio 00:11:25.429 direct=1 00:11:25.429 bs=4096 00:11:25.429 iodepth=1 00:11:25.429 norandommap=0 00:11:25.429 numjobs=1 00:11:25.429 00:11:25.429 
verify_dump=1 00:11:25.429 verify_backlog=512 00:11:25.429 verify_state_save=0 00:11:25.429 do_verify=1 00:11:25.429 verify=crc32c-intel 00:11:25.429 [job0] 00:11:25.429 filename=/dev/nvme0n1 00:11:25.429 [job1] 00:11:25.429 filename=/dev/nvme0n2 00:11:25.429 [job2] 00:11:25.429 filename=/dev/nvme0n3 00:11:25.429 [job3] 00:11:25.429 filename=/dev/nvme0n4 00:11:25.429 Could not set queue depth (nvme0n1) 00:11:25.429 Could not set queue depth (nvme0n2) 00:11:25.429 Could not set queue depth (nvme0n3) 00:11:25.429 Could not set queue depth (nvme0n4) 00:11:25.429 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.429 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.429 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.429 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:25.429 fio-3.35 00:11:25.429 Starting 4 threads 00:11:26.369 00:11:26.369 job0: (groupid=0, jobs=1): err= 0: pid=154646: Tue Nov 19 16:17:16 2024 00:11:26.369 read: IOPS=569, BW=2280KiB/s (2334kB/s)(2364KiB/1037msec) 00:11:26.369 slat (nsec): min=4165, max=35378, avg=10322.39, stdev=7794.97 00:11:26.369 clat (usec): min=176, max=42985, avg=1414.96, stdev=6859.36 00:11:26.369 lat (usec): min=181, max=43003, avg=1425.28, stdev=6861.92 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[ 184], 5.00th=[ 190], 10.00th=[ 196], 20.00th=[ 202], 00:11:26.369 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:11:26.369 | 70.00th=[ 235], 80.00th=[ 251], 90.00th=[ 351], 95.00th=[ 412], 00:11:26.369 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:11:26.369 | 99.99th=[42730] 00:11:26.369 write: IOPS=987, BW=3950KiB/s (4045kB/s)(4096KiB/1037msec); 0 zone resets 00:11:26.369 slat (nsec): min=5626, max=38511, avg=9566.41, 
stdev=5341.08 00:11:26.369 clat (usec): min=124, max=389, avg=175.71, stdev=49.73 00:11:26.369 lat (usec): min=131, max=425, avg=185.28, stdev=51.75 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[ 127], 5.00th=[ 131], 10.00th=[ 133], 20.00th=[ 137], 00:11:26.369 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 151], 60.00th=[ 163], 00:11:26.369 | 70.00th=[ 188], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 269], 00:11:26.369 | 99.00th=[ 297], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 392], 00:11:26.369 | 99.99th=[ 392] 00:11:26.369 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=2 00:11:26.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:11:26.369 lat (usec) : 250=84.95%, 500=13.93%, 750=0.06% 00:11:26.369 lat (msec) : 50=1.05% 00:11:26.369 cpu : usr=0.68%, sys=1.83%, ctx=1615, majf=0, minf=1 00:11:26.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 issued rwts: total=591,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.369 job1: (groupid=0, jobs=1): err= 0: pid=154647: Tue Nov 19 16:17:16 2024 00:11:26.369 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:11:26.369 slat (nsec): min=12167, max=33593, avg=24475.50, stdev=9215.55 00:11:26.369 clat (usec): min=40475, max=41085, avg=40948.32, stdev=114.66 00:11:26.369 lat (usec): min=40500, max=41110, avg=40972.80, stdev=113.73 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:26.369 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:26.369 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:26.369 | 99.00th=[41157], 99.50th=[41157], 
99.90th=[41157], 99.95th=[41157], 00:11:26.369 | 99.99th=[41157] 00:11:26.369 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:11:26.369 slat (nsec): min=6292, max=39483, avg=12569.56, stdev=7798.12 00:11:26.369 clat (usec): min=145, max=276, avg=193.47, stdev=30.75 00:11:26.369 lat (usec): min=154, max=299, avg=206.04, stdev=31.36 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 167], 00:11:26.369 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 186], 60.00th=[ 192], 00:11:26.369 | 70.00th=[ 200], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 255], 00:11:26.369 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 277], 99.95th=[ 277], 00:11:26.369 | 99.99th=[ 277] 00:11:26.369 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.369 lat (usec) : 250=87.45%, 500=8.43% 00:11:26.369 lat (msec) : 50=4.12% 00:11:26.369 cpu : usr=0.60%, sys=0.60%, ctx=534, majf=0, minf=1 00:11:26.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.369 job2: (groupid=0, jobs=1): err= 0: pid=154648: Tue Nov 19 16:17:16 2024 00:11:26.369 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:11:26.369 slat (nsec): min=6876, max=36939, avg=13861.04, stdev=5535.74 00:11:26.369 clat (usec): min=191, max=42195, avg=1653.91, stdev=7445.95 00:11:26.369 lat (usec): min=198, max=42202, avg=1667.77, stdev=7447.57 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[ 196], 5.00th=[ 200], 10.00th=[ 204], 20.00th=[ 210], 00:11:26.369 | 30.00th=[ 217], 
40.00th=[ 297], 50.00th=[ 302], 60.00th=[ 306], 00:11:26.369 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 343], 95.00th=[ 429], 00:11:26.369 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.369 | 99.99th=[42206] 00:11:26.369 write: IOPS=688, BW=2753KiB/s (2819kB/s)(2756KiB/1001msec); 0 zone resets 00:11:26.369 slat (nsec): min=7476, max=48914, avg=14520.19, stdev=7142.82 00:11:26.369 clat (usec): min=156, max=287, avg=190.67, stdev=17.90 00:11:26.369 lat (usec): min=166, max=322, avg=205.19, stdev=20.73 00:11:26.369 clat percentiles (usec): 00:11:26.369 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 178], 00:11:26.369 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192], 00:11:26.369 | 70.00th=[ 196], 80.00th=[ 202], 90.00th=[ 215], 95.00th=[ 227], 00:11:26.369 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 289], 99.95th=[ 289], 00:11:26.369 | 99.99th=[ 289] 00:11:26.369 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.369 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.369 lat (usec) : 250=72.94%, 500=25.65% 00:11:26.369 lat (msec) : 50=1.42% 00:11:26.369 cpu : usr=1.70%, sys=1.80%, ctx=1201, majf=0, minf=1 00:11:26.369 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.369 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.369 issued rwts: total=512,689,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.369 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.370 job3: (groupid=0, jobs=1): err= 0: pid=154649: Tue Nov 19 16:17:16 2024 00:11:26.370 read: IOPS=139, BW=560KiB/s (573kB/s)(580KiB/1036msec) 00:11:26.370 slat (nsec): min=7697, max=56398, avg=19885.68, stdev=6943.82 00:11:26.370 clat (usec): min=240, max=42320, avg=6297.50, stdev=14684.93 00:11:26.370 lat (usec): min=265, max=42339, 
avg=6317.38, stdev=14683.83 00:11:26.370 clat percentiles (usec): 00:11:26.370 | 1.00th=[ 249], 5.00th=[ 253], 10.00th=[ 258], 20.00th=[ 262], 00:11:26.370 | 30.00th=[ 265], 40.00th=[ 265], 50.00th=[ 273], 60.00th=[ 277], 00:11:26.370 | 70.00th=[ 285], 80.00th=[ 302], 90.00th=[41681], 95.00th=[42206], 00:11:26.370 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:26.370 | 99.99th=[42206] 00:11:26.370 write: IOPS=494, BW=1977KiB/s (2024kB/s)(2048KiB/1036msec); 0 zone resets 00:11:26.370 slat (nsec): min=7099, max=46210, avg=12974.23, stdev=6857.80 00:11:26.370 clat (usec): min=165, max=333, avg=215.39, stdev=30.91 00:11:26.370 lat (usec): min=173, max=343, avg=228.36, stdev=32.93 00:11:26.370 clat percentiles (usec): 00:11:26.370 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 180], 20.00th=[ 188], 00:11:26.370 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 210], 60.00th=[ 221], 00:11:26.370 | 70.00th=[ 233], 80.00th=[ 245], 90.00th=[ 260], 95.00th=[ 273], 00:11:26.370 | 99.00th=[ 289], 99.50th=[ 293], 99.90th=[ 334], 99.95th=[ 334], 00:11:26.370 | 99.99th=[ 334] 00:11:26.370 bw ( KiB/s): min= 4096, max= 4096, per=38.80%, avg=4096.00, stdev= 0.00, samples=1 00:11:26.370 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:26.370 lat (usec) : 250=65.91%, 500=30.90% 00:11:26.370 lat (msec) : 50=3.20% 00:11:26.370 cpu : usr=0.97%, sys=0.87%, ctx=657, majf=0, minf=1 00:11:26.370 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:26.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:26.370 issued rwts: total=145,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:26.370 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:26.370 00:11:26.370 Run status group 0 (all jobs): 00:11:26.370 READ: bw=4899KiB/s (5016kB/s), 87.2KiB/s-2280KiB/s (89.3kB/s-2334kB/s), io=5080KiB (5202kB), 
run=1001-1037msec 00:11:26.370 WRITE: bw=10.3MiB/s (10.8MB/s), 1977KiB/s-3950KiB/s (2024kB/s-4045kB/s), io=10.7MiB (11.2MB), run=1001-1037msec 00:11:26.370 00:11:26.370 Disk stats (read/write): 00:11:26.370 nvme0n1: ios=578/1024, merge=0/0, ticks=689/173, in_queue=862, util=86.67% 00:11:26.370 nvme0n2: ios=33/512, merge=0/0, ticks=748/100, in_queue=848, util=86.65% 00:11:26.370 nvme0n3: ios=202/512, merge=0/0, ticks=710/89, in_queue=799, util=88.98% 00:11:26.370 nvme0n4: ios=134/512, merge=0/0, ticks=702/101, in_queue=803, util=89.64% 00:11:26.370 16:17:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:11:26.370 [global] 00:11:26.370 thread=1 00:11:26.370 invalidate=1 00:11:26.370 rw=randwrite 00:11:26.370 time_based=1 00:11:26.370 runtime=1 00:11:26.370 ioengine=libaio 00:11:26.370 direct=1 00:11:26.370 bs=4096 00:11:26.370 iodepth=1 00:11:26.370 norandommap=0 00:11:26.370 numjobs=1 00:11:26.370 00:11:26.370 verify_dump=1 00:11:26.370 verify_backlog=512 00:11:26.370 verify_state_save=0 00:11:26.370 do_verify=1 00:11:26.370 verify=crc32c-intel 00:11:26.370 [job0] 00:11:26.370 filename=/dev/nvme0n1 00:11:26.370 [job1] 00:11:26.370 filename=/dev/nvme0n2 00:11:26.370 [job2] 00:11:26.370 filename=/dev/nvme0n3 00:11:26.370 [job3] 00:11:26.370 filename=/dev/nvme0n4 00:11:26.628 Could not set queue depth (nvme0n1) 00:11:26.628 Could not set queue depth (nvme0n2) 00:11:26.628 Could not set queue depth (nvme0n3) 00:11:26.628 Could not set queue depth (nvme0n4) 00:11:26.629 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.629 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.629 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.629 job3: (g=0): 
rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:26.629 fio-3.35 00:11:26.629 Starting 4 threads 00:11:28.007 00:11:28.007 job0: (groupid=0, jobs=1): err= 0: pid=154878: Tue Nov 19 16:17:18 2024 00:11:28.007 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:11:28.007 slat (nsec): min=8877, max=16132, avg=12119.14, stdev=1972.26 00:11:28.007 clat (usec): min=40867, max=41104, avg=40982.65, stdev=52.26 00:11:28.007 lat (usec): min=40881, max=41117, avg=40994.77, stdev=51.94 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:11:28.007 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:28.007 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:28.007 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:28.007 | 99.99th=[41157] 00:11:28.007 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:11:28.007 slat (nsec): min=8698, max=65400, avg=11606.96, stdev=4768.76 00:11:28.007 clat (usec): min=141, max=639, avg=239.80, stdev=38.74 00:11:28.007 lat (usec): min=151, max=660, avg=251.40, stdev=38.94 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[ 153], 5.00th=[ 198], 10.00th=[ 210], 20.00th=[ 219], 00:11:28.007 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 235], 60.00th=[ 241], 00:11:28.007 | 70.00th=[ 247], 80.00th=[ 258], 90.00th=[ 285], 95.00th=[ 302], 00:11:28.007 | 99.00th=[ 367], 99.50th=[ 437], 99.90th=[ 644], 99.95th=[ 644], 00:11:28.007 | 99.99th=[ 644] 00:11:28.007 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.007 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.007 lat (usec) : 250=70.79%, 500=24.91%, 750=0.19% 00:11:28.007 lat (msec) : 50=4.12% 00:11:28.007 cpu : usr=0.39%, sys=0.48%, ctx=537, majf=0, minf=1 00:11:28.007 IO depths : 1=100.0%, 2=0.0%, 
4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.007 job1: (groupid=0, jobs=1): err= 0: pid=154879: Tue Nov 19 16:17:18 2024 00:11:28.007 read: IOPS=20, BW=82.5KiB/s (84.5kB/s)(84.0KiB/1018msec) 00:11:28.007 slat (nsec): min=12936, max=35093, avg=22945.48, stdev=9459.90 00:11:28.007 clat (usec): min=40955, max=42054, avg=41848.37, stdev=315.22 00:11:28.007 lat (usec): min=40989, max=42089, avg=41871.32, stdev=315.15 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:11:28.007 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:28.007 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:28.007 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.007 | 99.99th=[42206] 00:11:28.007 write: IOPS=502, BW=2012KiB/s (2060kB/s)(2048KiB/1018msec); 0 zone resets 00:11:28.007 slat (nsec): min=7543, max=49627, avg=16987.74, stdev=7772.51 00:11:28.007 clat (usec): min=144, max=396, avg=247.85, stdev=42.39 00:11:28.007 lat (usec): min=152, max=406, avg=264.84, stdev=41.99 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 200], 20.00th=[ 223], 00:11:28.007 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 251], 00:11:28.007 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 302], 95.00th=[ 334], 00:11:28.007 | 99.00th=[ 371], 99.50th=[ 388], 99.90th=[ 396], 99.95th=[ 396], 00:11:28.007 | 99.99th=[ 396] 00:11:28.007 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.007 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 
00:11:28.007 lat (usec) : 250=56.47%, 500=39.59% 00:11:28.007 lat (msec) : 50=3.94% 00:11:28.007 cpu : usr=0.98%, sys=0.49%, ctx=534, majf=0, minf=1 00:11:28.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.007 job2: (groupid=0, jobs=1): err= 0: pid=154880: Tue Nov 19 16:17:18 2024 00:11:28.007 read: IOPS=2118, BW=8476KiB/s (8679kB/s)(8484KiB/1001msec) 00:11:28.007 slat (nsec): min=5254, max=51104, avg=10840.85, stdev=5789.08 00:11:28.007 clat (usec): min=165, max=989, avg=220.51, stdev=63.36 00:11:28.007 lat (usec): min=170, max=1005, avg=231.35, stdev=65.37 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[ 172], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:11:28.007 | 30.00th=[ 192], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 208], 00:11:28.007 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 260], 95.00th=[ 404], 00:11:28.007 | 99.00th=[ 457], 99.50th=[ 490], 99.90th=[ 758], 99.95th=[ 881], 00:11:28.007 | 99.99th=[ 988] 00:11:28.007 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:11:28.007 slat (nsec): min=5708, max=61199, avg=13172.88, stdev=6315.44 00:11:28.007 clat (usec): min=121, max=2962, avg=179.49, stdev=81.41 00:11:28.007 lat (usec): min=128, max=2971, avg=192.66, stdev=82.57 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:11:28.007 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 165], 00:11:28.007 | 70.00th=[ 190], 80.00th=[ 221], 90.00th=[ 247], 95.00th=[ 269], 00:11:28.007 | 99.00th=[ 338], 99.50th=[ 363], 99.90th=[ 469], 99.95th=[ 2089], 00:11:28.007 | 99.99th=[ 2966] 00:11:28.007 
bw ( KiB/s): min= 8192, max= 8192, per=51.65%, avg=8192.00, stdev= 0.00, samples=1 00:11:28.007 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:28.007 lat (usec) : 250=89.79%, 500=9.96%, 750=0.13%, 1000=0.09% 00:11:28.007 lat (msec) : 4=0.04% 00:11:28.007 cpu : usr=3.50%, sys=6.90%, ctx=4681, majf=0, minf=1 00:11:28.007 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.007 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.007 issued rwts: total=2121,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.007 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.007 job3: (groupid=0, jobs=1): err= 0: pid=154881: Tue Nov 19 16:17:18 2024 00:11:28.007 read: IOPS=21, BW=86.8KiB/s (88.9kB/s)(88.0KiB/1014msec) 00:11:28.007 slat (nsec): min=9593, max=33940, avg=21708.59, stdev=9531.20 00:11:28.007 clat (usec): min=13016, max=42092, avg=39860.66, stdev=6012.71 00:11:28.007 lat (usec): min=13029, max=42107, avg=39882.37, stdev=6014.62 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[13042], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:11:28.007 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:28.007 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:11:28.007 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:28.007 | 99.99th=[42206] 00:11:28.007 write: IOPS=504, BW=2020KiB/s (2068kB/s)(2048KiB/1014msec); 0 zone resets 00:11:28.007 slat (nsec): min=6883, max=47817, avg=16245.38, stdev=8083.33 00:11:28.007 clat (usec): min=166, max=561, avg=245.16, stdev=41.93 00:11:28.007 lat (usec): min=175, max=571, avg=261.41, stdev=40.44 00:11:28.007 clat percentiles (usec): 00:11:28.007 | 1.00th=[ 178], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 212], 00:11:28.008 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 237], 
60.00th=[ 247], 00:11:28.008 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 322], 00:11:28.008 | 99.00th=[ 375], 99.50th=[ 408], 99.90th=[ 562], 99.95th=[ 562], 00:11:28.008 | 99.99th=[ 562] 00:11:28.008 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:11:28.008 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:28.008 lat (usec) : 250=61.24%, 500=34.46%, 750=0.19% 00:11:28.008 lat (msec) : 20=0.19%, 50=3.93% 00:11:28.008 cpu : usr=0.89%, sys=0.69%, ctx=534, majf=0, minf=1 00:11:28.008 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:28.008 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.008 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:28.008 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:28.008 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:28.008 00:11:28.008 Run status group 0 (all jobs): 00:11:28.008 READ: bw=8465KiB/s (8668kB/s), 82.5KiB/s-8476KiB/s (84.5kB/s-8679kB/s), io=8744KiB (8954kB), run=1001-1033msec 00:11:28.008 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:11:28.008 00:11:28.008 Disk stats (read/write): 00:11:28.008 nvme0n1: ios=59/512, merge=0/0, ticks=1005/119, in_queue=1124, util=97.90% 00:11:28.008 nvme0n2: ios=56/512, merge=0/0, ticks=976/122, in_queue=1098, util=98.38% 00:11:28.008 nvme0n3: ios=1871/2048, merge=0/0, ticks=677/359, in_queue=1036, util=91.35% 00:11:28.008 nvme0n4: ios=18/512, merge=0/0, ticks=713/115, in_queue=828, util=89.72% 00:11:28.008 16:17:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:28.008 [global] 00:11:28.008 thread=1 00:11:28.008 invalidate=1 00:11:28.008 rw=write 00:11:28.008 time_based=1 00:11:28.008 runtime=1 
00:11:28.008 ioengine=libaio 00:11:28.008 direct=1 00:11:28.008 bs=4096 00:11:28.008 iodepth=128 00:11:28.008 norandommap=0 00:11:28.008 numjobs=1 00:11:28.008 00:11:28.008 verify_dump=1 00:11:28.008 verify_backlog=512 00:11:28.008 verify_state_save=0 00:11:28.008 do_verify=1 00:11:28.008 verify=crc32c-intel 00:11:28.008 [job0] 00:11:28.008 filename=/dev/nvme0n1 00:11:28.008 [job1] 00:11:28.008 filename=/dev/nvme0n2 00:11:28.008 [job2] 00:11:28.008 filename=/dev/nvme0n3 00:11:28.008 [job3] 00:11:28.008 filename=/dev/nvme0n4 00:11:28.008 Could not set queue depth (nvme0n1) 00:11:28.008 Could not set queue depth (nvme0n2) 00:11:28.008 Could not set queue depth (nvme0n3) 00:11:28.008 Could not set queue depth (nvme0n4) 00:11:28.266 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.266 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.266 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.266 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:28.266 fio-3.35 00:11:28.266 Starting 4 threads 00:11:29.646 00:11:29.646 job0: (groupid=0, jobs=1): err= 0: pid=155113: Tue Nov 19 16:17:19 2024 00:11:29.646 read: IOPS=3555, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1008msec) 00:11:29.646 slat (usec): min=2, max=9327, avg=118.44, stdev=691.01 00:11:29.646 clat (usec): min=7077, max=47459, avg=15081.85, stdev=5894.75 00:11:29.646 lat (usec): min=7081, max=53940, avg=15200.30, stdev=5954.25 00:11:29.646 clat percentiles (usec): 00:11:29.646 | 1.00th=[ 7963], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10683], 00:11:29.646 | 30.00th=[10945], 40.00th=[11600], 50.00th=[12780], 60.00th=[17433], 00:11:29.646 | 70.00th=[17957], 80.00th=[18220], 90.00th=[21103], 95.00th=[25297], 00:11:29.646 | 99.00th=[39584], 99.50th=[46400], 99.90th=[47449], 
99.95th=[47449], 00:11:29.646 | 99.99th=[47449] 00:11:29.646 write: IOPS=3810, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1008msec); 0 zone resets 00:11:29.646 slat (usec): min=2, max=9424, avg=143.52, stdev=661.23 00:11:29.646 clat (usec): min=6370, max=64929, avg=19129.77, stdev=12198.24 00:11:29.646 lat (usec): min=6388, max=64945, avg=19273.28, stdev=12282.13 00:11:29.646 clat percentiles (usec): 00:11:29.646 | 1.00th=[ 7111], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11469], 00:11:29.646 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12780], 60.00th=[16319], 00:11:29.646 | 70.00th=[22414], 80.00th=[24511], 90.00th=[38536], 95.00th=[49021], 00:11:29.646 | 99.00th=[59507], 99.50th=[61080], 99.90th=[62129], 99.95th=[62129], 00:11:29.646 | 99.99th=[64750] 00:11:29.646 bw ( KiB/s): min=11520, max=18192, per=21.78%, avg=14856.00, stdev=4717.82, samples=2 00:11:29.646 iops : min= 2880, max= 4548, avg=3714.00, stdev=1179.45, samples=2 00:11:29.646 lat (msec) : 10=5.74%, 20=72.23%, 50=19.60%, 100=2.44% 00:11:29.646 cpu : usr=3.57%, sys=3.77%, ctx=440, majf=0, minf=1 00:11:29.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:29.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.646 issued rwts: total=3584,3841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.646 job1: (groupid=0, jobs=1): err= 0: pid=155114: Tue Nov 19 16:17:19 2024 00:11:29.646 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:29.646 slat (usec): min=2, max=10675, avg=91.07, stdev=569.45 00:11:29.646 clat (usec): min=5510, max=30270, avg=11751.44, stdev=2863.32 00:11:29.646 lat (usec): min=5520, max=30272, avg=11842.50, stdev=2910.46 00:11:29.646 clat percentiles (usec): 00:11:29.646 | 1.00th=[ 6259], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10159], 00:11:29.646 | 30.00th=[10421], 
40.00th=[10683], 50.00th=[10814], 60.00th=[11338], 00:11:29.646 | 70.00th=[11994], 80.00th=[14353], 90.00th=[15270], 95.00th=[17433], 00:11:29.646 | 99.00th=[22152], 99.50th=[25297], 99.90th=[28443], 99.95th=[28443], 00:11:29.646 | 99.99th=[30278] 00:11:29.646 write: IOPS=5607, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:11:29.646 slat (usec): min=2, max=6730, avg=88.91, stdev=464.20 00:11:29.646 clat (usec): min=629, max=25340, avg=11868.79, stdev=3081.61 00:11:29.646 lat (usec): min=4845, max=25344, avg=11957.71, stdev=3116.68 00:11:29.646 clat percentiles (usec): 00:11:29.646 | 1.00th=[ 5669], 5.00th=[ 8455], 10.00th=[ 9634], 20.00th=[10159], 00:11:29.646 | 30.00th=[10552], 40.00th=[10945], 50.00th=[11338], 60.00th=[11600], 00:11:29.646 | 70.00th=[11863], 80.00th=[12649], 90.00th=[14353], 95.00th=[19792], 00:11:29.646 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23200], 99.95th=[23725], 00:11:29.646 | 99.99th=[25297] 00:11:29.646 bw ( KiB/s): min=20480, max=23488, per=32.24%, avg=21984.00, stdev=2126.98, samples=2 00:11:29.646 iops : min= 5120, max= 5872, avg=5496.00, stdev=531.74, samples=2 00:11:29.646 lat (usec) : 750=0.01% 00:11:29.646 lat (msec) : 10=16.25%, 20=80.17%, 50=3.57% 00:11:29.646 cpu : usr=4.69%, sys=5.79%, ctx=531, majf=0, minf=1 00:11:29.646 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:11:29.646 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.646 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.646 issued rwts: total=5120,5624,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.646 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.646 job2: (groupid=0, jobs=1): err= 0: pid=155115: Tue Nov 19 16:17:19 2024 00:11:29.646 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:11:29.646 slat (usec): min=3, max=12771, avg=118.48, stdev=808.80 00:11:29.646 clat (usec): min=5022, max=27102, avg=14818.28, stdev=3807.48 
00:11:29.646 lat (usec): min=5029, max=27116, avg=14936.76, stdev=3849.05 00:11:29.646 clat percentiles (usec): 00:11:29.646 | 1.00th=[ 6063], 5.00th=[ 9896], 10.00th=[11600], 20.00th=[12649], 00:11:29.646 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13566], 60.00th=[13960], 00:11:29.646 | 70.00th=[15533], 80.00th=[17695], 90.00th=[20841], 95.00th=[23200], 00:11:29.646 | 99.00th=[25297], 99.50th=[26084], 99.90th=[27132], 99.95th=[27132], 00:11:29.646 | 99.99th=[27132] 00:11:29.646 write: IOPS=4620, BW=18.0MiB/s (18.9MB/s)(18.2MiB/1006msec); 0 zone resets 00:11:29.646 slat (usec): min=3, max=11115, avg=87.97, stdev=391.90 00:11:29.646 clat (usec): min=2838, max=27085, avg=12773.80, stdev=2795.66 00:11:29.647 lat (usec): min=2845, max=27102, avg=12861.77, stdev=2824.94 00:11:29.647 clat percentiles (usec): 00:11:29.647 | 1.00th=[ 4080], 5.00th=[ 6194], 10.00th=[ 7701], 20.00th=[11994], 00:11:29.647 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:11:29.647 | 70.00th=[14222], 80.00th=[14353], 90.00th=[14615], 95.00th=[14746], 00:11:29.647 | 99.00th=[15008], 99.50th=[15139], 99.90th=[26084], 99.95th=[26608], 00:11:29.647 | 99.99th=[27132] 00:11:29.647 bw ( KiB/s): min=17040, max=19824, per=27.03%, avg=18432.00, stdev=1968.59, samples=2 00:11:29.647 iops : min= 4260, max= 4956, avg=4608.00, stdev=492.15, samples=2 00:11:29.647 lat (msec) : 4=0.48%, 10=10.10%, 20=83.24%, 50=6.18% 00:11:29.647 cpu : usr=5.17%, sys=10.55%, ctx=590, majf=0, minf=1 00:11:29.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:29.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.647 issued rwts: total=4608,4648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:29.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.647 job3: (groupid=0, jobs=1): err= 0: pid=155116: Tue Nov 19 16:17:19 2024 00:11:29.647 read: 
IOPS=2594, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1008msec) 00:11:29.647 slat (usec): min=2, max=15508, avg=157.56, stdev=989.21 00:11:29.647 clat (usec): min=7248, max=49544, avg=18329.96, stdev=6236.48 00:11:29.647 lat (usec): min=8456, max=49549, avg=18487.53, stdev=6338.43 00:11:29.647 clat percentiles (usec): 00:11:29.647 | 1.00th=[10159], 5.00th=[12125], 10.00th=[13960], 20.00th=[14484], 00:11:29.647 | 30.00th=[15270], 40.00th=[15401], 50.00th=[15533], 60.00th=[15926], 00:11:29.647 | 70.00th=[18220], 80.00th=[23200], 90.00th=[28443], 95.00th=[30278], 00:11:29.647 | 99.00th=[41681], 99.50th=[48497], 99.90th=[49546], 99.95th=[49546], 00:11:29.647 | 99.99th=[49546] 00:11:29.647 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:11:29.647 slat (usec): min=3, max=6786, avg=184.22, stdev=688.83 00:11:29.647 clat (usec): min=8833, max=66001, avg=25896.00, stdev=9726.77 00:11:29.647 lat (usec): min=8838, max=66009, avg=26080.22, stdev=9793.23 00:11:29.647 clat percentiles (usec): 00:11:29.647 | 1.00th=[12649], 5.00th=[13698], 10.00th=[13960], 20.00th=[14615], 00:11:29.647 | 30.00th=[22676], 40.00th=[23462], 50.00th=[24773], 60.00th=[27657], 00:11:29.647 | 70.00th=[29754], 80.00th=[30802], 90.00th=[34866], 95.00th=[46400], 00:11:29.647 | 99.00th=[56361], 99.50th=[56886], 99.90th=[65799], 99.95th=[65799], 00:11:29.647 | 99.99th=[65799] 00:11:29.647 bw ( KiB/s): min=11712, max=12288, per=17.60%, avg=12000.00, stdev=407.29, samples=2 00:11:29.647 iops : min= 2928, max= 3072, avg=3000.00, stdev=101.82, samples=2 00:11:29.647 lat (msec) : 10=0.53%, 20=46.88%, 50=50.34%, 100=2.25% 00:11:29.647 cpu : usr=2.98%, sys=4.27%, ctx=397, majf=0, minf=1 00:11:29.647 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:29.647 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:29.647 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:29.647 issued rwts: total=2615,3072,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:11:29.647 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:29.647 00:11:29.647 Run status group 0 (all jobs): 00:11:29.647 READ: bw=61.7MiB/s (64.7MB/s), 10.1MiB/s-19.9MiB/s (10.6MB/s-20.9MB/s), io=62.2MiB (65.2MB), run=1003-1008msec 00:11:29.647 WRITE: bw=66.6MiB/s (69.8MB/s), 11.9MiB/s-21.9MiB/s (12.5MB/s-23.0MB/s), io=67.1MiB (70.4MB), run=1003-1008msec 00:11:29.647 00:11:29.647 Disk stats (read/write): 00:11:29.647 nvme0n1: ios=3114/3072, merge=0/0, ticks=18487/22875, in_queue=41362, util=91.08% 00:11:29.647 nvme0n2: ios=4658/4787, merge=0/0, ticks=27126/26155, in_queue=53281, util=95.43% 00:11:29.647 nvme0n3: ios=3752/4096, merge=0/0, ticks=53358/51465, in_queue=104823, util=91.68% 00:11:29.647 nvme0n4: ios=2171/2560, merge=0/0, ticks=20740/32103, in_queue=52843, util=89.73% 00:11:29.647 16:17:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:29.647 [global] 00:11:29.647 thread=1 00:11:29.647 invalidate=1 00:11:29.647 rw=randwrite 00:11:29.647 time_based=1 00:11:29.647 runtime=1 00:11:29.647 ioengine=libaio 00:11:29.647 direct=1 00:11:29.647 bs=4096 00:11:29.647 iodepth=128 00:11:29.647 norandommap=0 00:11:29.647 numjobs=1 00:11:29.647 00:11:29.647 verify_dump=1 00:11:29.647 verify_backlog=512 00:11:29.647 verify_state_save=0 00:11:29.647 do_verify=1 00:11:29.647 verify=crc32c-intel 00:11:29.647 [job0] 00:11:29.647 filename=/dev/nvme0n1 00:11:29.647 [job1] 00:11:29.647 filename=/dev/nvme0n2 00:11:29.647 [job2] 00:11:29.647 filename=/dev/nvme0n3 00:11:29.647 [job3] 00:11:29.647 filename=/dev/nvme0n4 00:11:29.647 Could not set queue depth (nvme0n1) 00:11:29.647 Could not set queue depth (nvme0n2) 00:11:29.647 Could not set queue depth (nvme0n3) 00:11:29.647 Could not set queue depth (nvme0n4) 00:11:29.647 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.647 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.647 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.647 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:29.647 fio-3.35 00:11:29.647 Starting 4 threads 00:11:31.026 00:11:31.026 job0: (groupid=0, jobs=1): err= 0: pid=155458: Tue Nov 19 16:17:21 2024 00:11:31.026 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:11:31.026 slat (usec): min=3, max=21538, avg=153.75, stdev=1104.06 00:11:31.026 clat (usec): min=4677, max=62683, avg=18270.04, stdev=9015.98 00:11:31.026 lat (usec): min=4684, max=62702, avg=18423.79, stdev=9131.58 00:11:31.026 clat percentiles (usec): 00:11:31.026 | 1.00th=[ 6456], 5.00th=[ 9372], 10.00th=[10945], 20.00th=[11863], 00:11:31.026 | 30.00th=[11994], 40.00th=[12649], 50.00th=[15139], 60.00th=[17171], 00:11:31.027 | 70.00th=[20317], 80.00th=[25822], 90.00th=[30540], 95.00th=[38011], 00:11:31.027 | 99.00th=[43779], 99.50th=[55313], 99.90th=[62653], 99.95th=[62653], 00:11:31.027 | 99.99th=[62653] 00:11:31.027 write: IOPS=2983, BW=11.7MiB/s (12.2MB/s)(11.9MiB/1021msec); 0 zone resets 00:11:31.027 slat (usec): min=4, max=31951, avg=184.06, stdev=1164.43 00:11:31.027 clat (msec): min=2, max=117, avg=26.45, stdev=20.82 00:11:31.027 lat (msec): min=2, max=118, avg=26.64, stdev=20.92 00:11:31.027 clat percentiles (msec): 00:11:31.027 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:11:31.027 | 30.00th=[ 13], 40.00th=[ 16], 50.00th=[ 20], 60.00th=[ 25], 00:11:31.027 | 70.00th=[ 26], 80.00th=[ 39], 90.00th=[ 57], 95.00th=[ 69], 00:11:31.027 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 118], 99.95th=[ 118], 00:11:31.027 | 99.99th=[ 118] 00:11:31.027 bw ( KiB/s): min= 7808, max=15536, per=19.46%, avg=11672.00, stdev=5464.52, 
samples=2 00:11:31.027 iops : min= 1952, max= 3884, avg=2918.00, stdev=1366.13, samples=2 00:11:31.027 lat (msec) : 4=0.27%, 10=3.91%, 20=54.83%, 50=34.07%, 100=5.82% 00:11:31.027 lat (msec) : 250=1.11% 00:11:31.027 cpu : usr=5.10%, sys=6.18%, ctx=249, majf=0, minf=1 00:11:31.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:31.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.027 issued rwts: total=2560,3046,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.027 job1: (groupid=0, jobs=1): err= 0: pid=155459: Tue Nov 19 16:17:21 2024 00:11:31.027 read: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec) 00:11:31.027 slat (usec): min=3, max=40766, avg=123.42, stdev=1084.32 00:11:31.027 clat (usec): min=1627, max=78032, avg=15184.14, stdev=8964.84 00:11:31.027 lat (usec): min=1650, max=78059, avg=15307.55, stdev=9042.86 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 4293], 5.00th=[ 7046], 10.00th=[ 8979], 20.00th=[10028], 00:11:31.027 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12387], 60.00th=[13173], 00:11:31.027 | 70.00th=[13960], 80.00th=[17695], 90.00th=[27657], 95.00th=[37487], 00:11:31.027 | 99.00th=[53740], 99.50th=[53740], 99.90th=[56886], 99.95th=[56886], 00:11:31.027 | 99.99th=[78119] 00:11:31.027 write: IOPS=4935, BW=19.3MiB/s (20.2MB/s)(19.5MiB/1011msec); 0 zone resets 00:11:31.027 slat (usec): min=4, max=18951, avg=77.38, stdev=516.74 00:11:31.027 clat (usec): min=1470, max=43518, avg=11662.93, stdev=4899.41 00:11:31.027 lat (usec): min=1479, max=43537, avg=11740.31, stdev=4958.54 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 2999], 5.00th=[ 4948], 10.00th=[ 6456], 20.00th=[ 8848], 00:11:31.027 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10683], 60.00th=[11994], 00:11:31.027 | 70.00th=[13042], 
80.00th=[13566], 90.00th=[18220], 95.00th=[23725], 00:11:31.027 | 99.00th=[25035], 99.50th=[25035], 99.90th=[29492], 99.95th=[40109], 00:11:31.027 | 99.99th=[43779] 00:11:31.027 bw ( KiB/s): min=17264, max=21640, per=32.43%, avg=19452.00, stdev=3094.30, samples=2 00:11:31.027 iops : min= 4316, max= 5410, avg=4863.00, stdev=773.57, samples=2 00:11:31.027 lat (msec) : 2=0.09%, 4=1.80%, 10=31.32%, 20=54.66%, 50=11.43% 00:11:31.027 lat (msec) : 100=0.70% 00:11:31.027 cpu : usr=4.16%, sys=6.24%, ctx=622, majf=0, minf=1 00:11:31.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:31.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.027 issued rwts: total=4608,4990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.027 job2: (groupid=0, jobs=1): err= 0: pid=155460: Tue Nov 19 16:17:21 2024 00:11:31.027 read: IOPS=2507, BW=9.79MiB/s (10.3MB/s)(10.0MiB/1021msec) 00:11:31.027 slat (usec): min=3, max=18648, avg=164.97, stdev=1234.80 00:11:31.027 clat (usec): min=5013, max=49770, avg=20901.90, stdev=8919.19 00:11:31.027 lat (usec): min=5031, max=49789, avg=21066.87, stdev=9031.03 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 6849], 5.00th=[ 8356], 10.00th=[11207], 20.00th=[13173], 00:11:31.027 | 30.00th=[13698], 40.00th=[17171], 50.00th=[17695], 60.00th=[25035], 00:11:31.027 | 70.00th=[26346], 80.00th=[30540], 90.00th=[32900], 95.00th=[34341], 00:11:31.027 | 99.00th=[41681], 99.50th=[49021], 99.90th=[49021], 99.95th=[49021], 00:11:31.027 | 99.99th=[49546] 00:11:31.027 write: IOPS=2881, BW=11.3MiB/s (11.8MB/s)(11.5MiB/1021msec); 0 zone resets 00:11:31.027 slat (usec): min=5, max=28165, avg=172.82, stdev=1226.26 00:11:31.027 clat (usec): min=257, max=132007, avg=25791.60, stdev=23787.86 00:11:31.027 lat (usec): min=583, max=138372, avg=25964.42, 
stdev=23926.49 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 1172], 5.00th=[ 2999], 10.00th=[ 5080], 20.00th=[ 11600], 00:11:31.027 | 30.00th=[ 12911], 40.00th=[ 18482], 50.00th=[ 23200], 60.00th=[ 24249], 00:11:31.027 | 70.00th=[ 25035], 80.00th=[ 30540], 90.00th=[ 44827], 95.00th=[ 69731], 00:11:31.027 | 99.00th=[126354], 99.50th=[129500], 99.90th=[131597], 99.95th=[131597], 00:11:31.027 | 99.99th=[131597] 00:11:31.027 bw ( KiB/s): min= 8192, max=14320, per=18.76%, avg=11256.00, stdev=4333.15, samples=2 00:11:31.027 iops : min= 2048, max= 3580, avg=2814.00, stdev=1083.29, samples=2 00:11:31.027 lat (usec) : 500=0.02%, 750=0.20%, 1000=0.20% 00:11:31.027 lat (msec) : 2=0.91%, 4=1.87%, 10=8.96%, 20=36.37%, 50=46.71% 00:11:31.027 lat (msec) : 100=2.89%, 250=1.87% 00:11:31.027 cpu : usr=3.82%, sys=7.16%, ctx=250, majf=0, minf=1 00:11:31.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:31.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.027 issued rwts: total=2560,2942,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.027 job3: (groupid=0, jobs=1): err= 0: pid=155461: Tue Nov 19 16:17:21 2024 00:11:31.027 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:11:31.027 slat (usec): min=2, max=25753, avg=112.36, stdev=930.87 00:11:31.027 clat (usec): min=5968, max=61231, avg=14891.10, stdev=7580.24 00:11:31.027 lat (usec): min=5974, max=61267, avg=15003.46, stdev=7658.85 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 6194], 5.00th=[ 8979], 10.00th=[ 9896], 20.00th=[11863], 00:11:31.027 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12518], 60.00th=[13042], 00:11:31.027 | 70.00th=[13698], 80.00th=[15008], 90.00th=[23725], 95.00th=[35390], 00:11:31.027 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45351], 
99.95th=[51643], 00:11:31.027 | 99.99th=[61080] 00:11:31.027 write: IOPS=4311, BW=16.8MiB/s (17.7MB/s)(16.9MiB/1005msec); 0 zone resets 00:11:31.027 slat (usec): min=3, max=33217, avg=110.39, stdev=1017.47 00:11:31.027 clat (usec): min=502, max=64546, avg=15369.55, stdev=9184.67 00:11:31.027 lat (usec): min=1957, max=64564, avg=15479.95, stdev=9257.43 00:11:31.027 clat percentiles (usec): 00:11:31.027 | 1.00th=[ 6194], 5.00th=[ 6849], 10.00th=[ 9110], 20.00th=[11469], 00:11:31.027 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12649], 60.00th=[13042], 00:11:31.027 | 70.00th=[13304], 80.00th=[16581], 90.00th=[31065], 95.00th=[38536], 00:11:31.027 | 99.00th=[52691], 99.50th=[52691], 99.90th=[53216], 99.95th=[58459], 00:11:31.027 | 99.99th=[64750] 00:11:31.027 bw ( KiB/s): min=16384, max=17256, per=28.04%, avg=16820.00, stdev=616.60, samples=2 00:11:31.027 iops : min= 4096, max= 4314, avg=4205.00, stdev=154.15, samples=2 00:11:31.027 lat (usec) : 750=0.01% 00:11:31.027 lat (msec) : 2=0.09%, 4=0.21%, 10=11.18%, 20=77.14%, 50=10.55% 00:11:31.027 lat (msec) : 100=0.82% 00:11:31.027 cpu : usr=3.39%, sys=6.57%, ctx=257, majf=0, minf=2 00:11:31.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:31.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:31.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:31.027 issued rwts: total=4096,4333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:31.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:31.027 00:11:31.027 Run status group 0 (all jobs): 00:11:31.027 READ: bw=52.9MiB/s (55.5MB/s), 9.79MiB/s-17.8MiB/s (10.3MB/s-18.7MB/s), io=54.0MiB (56.6MB), run=1005-1021msec 00:11:31.027 WRITE: bw=58.6MiB/s (61.4MB/s), 11.3MiB/s-19.3MiB/s (11.8MB/s-20.2MB/s), io=59.8MiB (62.7MB), run=1005-1021msec 00:11:31.027 00:11:31.027 Disk stats (read/write): 00:11:31.027 nvme0n1: ios=2583/2575, merge=0/0, ticks=42490/52627, in_queue=95117, 
util=99.20% 00:11:31.027 nvme0n2: ios=3722/4096, merge=0/0, ticks=58695/46390, in_queue=105085, util=99.29% 00:11:31.027 nvme0n3: ios=2079/2154, merge=0/0, ticks=43903/60481, in_queue=104384, util=99.16% 00:11:31.027 nvme0n4: ios=3261/3584, merge=0/0, ticks=28552/31834, in_queue=60386, util=89.66% 00:11:31.027 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:31.027 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=155599 00:11:31.027 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:31.027 16:17:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:31.027 [global] 00:11:31.027 thread=1 00:11:31.027 invalidate=1 00:11:31.027 rw=read 00:11:31.027 time_based=1 00:11:31.027 runtime=10 00:11:31.027 ioengine=libaio 00:11:31.027 direct=1 00:11:31.027 bs=4096 00:11:31.027 iodepth=1 00:11:31.027 norandommap=1 00:11:31.027 numjobs=1 00:11:31.027 00:11:31.027 [job0] 00:11:31.027 filename=/dev/nvme0n1 00:11:31.027 [job1] 00:11:31.027 filename=/dev/nvme0n2 00:11:31.027 [job2] 00:11:31.027 filename=/dev/nvme0n3 00:11:31.027 [job3] 00:11:31.027 filename=/dev/nvme0n4 00:11:31.027 Could not set queue depth (nvme0n1) 00:11:31.027 Could not set queue depth (nvme0n2) 00:11:31.027 Could not set queue depth (nvme0n3) 00:11:31.027 Could not set queue depth (nvme0n4) 00:11:31.027 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.027 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.027 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.028 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:31.028 fio-3.35 
00:11:31.028 Starting 4 threads 00:11:34.328 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:34.328 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:34.328 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=26148864, buflen=4096 00:11:34.328 fio: pid=155704, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:34.587 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.587 16:17:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:34.587 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=319488, buflen=4096 00:11:34.587 fio: pid=155703, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:34.845 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=856064, buflen=4096 00:11:34.845 fio: pid=155699, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:34.845 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:34.845 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:35.105 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.105 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:35.105 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1302528, buflen=4096 00:11:35.105 fio: pid=155700, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:11:35.105 00:11:35.105 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155699: Tue Nov 19 16:17:25 2024 00:11:35.105 read: IOPS=59, BW=235KiB/s (241kB/s)(836KiB/3557msec) 00:11:35.105 slat (usec): min=5, max=10928, avg=105.24, stdev=927.32 00:11:35.105 clat (usec): min=275, max=42449, avg=16860.47, stdev=20340.92 00:11:35.105 lat (usec): min=285, max=53022, avg=16966.08, stdev=20478.96 00:11:35.105 clat percentiles (usec): 00:11:35.105 | 1.00th=[ 285], 5.00th=[ 310], 10.00th=[ 343], 20.00th=[ 363], 00:11:35.105 | 30.00th=[ 379], 40.00th=[ 408], 50.00th=[ 437], 60.00th=[ 635], 00:11:35.105 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:35.105 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:35.105 | 99.99th=[42206] 00:11:35.105 bw ( KiB/s): min= 96, max= 472, per=3.61%, avg=261.33, stdev=172.15, samples=6 00:11:35.105 iops : min= 24, max= 118, avg=65.33, stdev=43.04, samples=6 00:11:35.105 lat (usec) : 500=55.24%, 750=4.76% 00:11:35.105 lat (msec) : 50=39.52% 00:11:35.105 cpu : usr=0.08%, sys=0.08%, ctx=212, majf=0, minf=2 00:11:35.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 issued rwts: total=210,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.105 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155700: Tue Nov 19 
16:17:25 2024 00:11:35.105 read: IOPS=82, BW=329KiB/s (337kB/s)(1272KiB/3869msec) 00:11:35.105 slat (usec): min=6, max=5928, avg=53.10, stdev=428.15 00:11:35.105 clat (usec): min=242, max=43941, avg=12032.51, stdev=18741.31 00:11:35.105 lat (usec): min=258, max=46981, avg=12085.72, stdev=18799.08 00:11:35.105 clat percentiles (usec): 00:11:35.105 | 1.00th=[ 247], 5.00th=[ 253], 10.00th=[ 255], 20.00th=[ 262], 00:11:35.105 | 30.00th=[ 269], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 285], 00:11:35.105 | 70.00th=[ 375], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:11:35.105 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43779], 99.95th=[43779], 00:11:35.105 | 99.99th=[43779] 00:11:35.105 bw ( KiB/s): min= 93, max= 512, per=4.89%, avg=353.86, stdev=141.90, samples=7 00:11:35.105 iops : min= 23, max= 128, avg=88.43, stdev=35.55, samples=7 00:11:35.105 lat (usec) : 250=3.45%, 500=68.03% 00:11:35.105 lat (msec) : 50=28.21% 00:11:35.105 cpu : usr=0.10%, sys=0.23%, ctx=322, majf=0, minf=1 00:11:35.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 issued rwts: total=319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.105 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155703: Tue Nov 19 16:17:25 2024 00:11:35.105 read: IOPS=23, BW=94.7KiB/s (97.0kB/s)(312KiB/3295msec) 00:11:35.105 slat (nsec): min=11980, max=36079, avg=23540.72, stdev=9908.24 00:11:35.105 clat (usec): min=40942, max=42041, avg=41914.72, stdev=223.25 00:11:35.105 lat (usec): min=40977, max=42056, avg=41938.39, stdev=222.18 00:11:35.105 clat percentiles (usec): 00:11:35.105 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:11:35.105 | 30.00th=[42206], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:11:35.105 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:11:35.105 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:11:35.105 | 99.99th=[42206] 00:11:35.105 bw ( KiB/s): min= 88, max= 96, per=1.30%, avg=94.67, stdev= 3.27, samples=6 00:11:35.105 iops : min= 22, max= 24, avg=23.67, stdev= 0.82, samples=6 00:11:35.105 lat (msec) : 50=98.73% 00:11:35.105 cpu : usr=0.12%, sys=0.00%, ctx=79, majf=0, minf=1 00:11:35.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.105 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=155704: Tue Nov 19 16:17:25 2024 00:11:35.105 read: IOPS=2141, BW=8566KiB/s (8772kB/s)(24.9MiB/2981msec) 00:11:35.105 slat (nsec): min=5358, max=63805, avg=13322.40, stdev=5405.73 00:11:35.105 clat (usec): min=182, max=41070, avg=446.72, stdev=2830.57 00:11:35.105 lat (usec): min=188, max=41088, avg=460.04, stdev=2831.17 00:11:35.105 clat percentiles (usec): 00:11:35.105 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 212], 20.00th=[ 233], 00:11:35.105 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 255], 00:11:35.105 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 273], 95.00th=[ 285], 00:11:35.105 | 99.00th=[ 359], 99.50th=[ 914], 99.90th=[41157], 99.95th=[41157], 00:11:35.105 | 99.99th=[41157] 00:11:35.105 bw ( KiB/s): min= 96, max=15880, per=100.00%, avg=10195.20, stdev=6947.20, samples=5 00:11:35.105 iops : min= 24, max= 3970, avg=2548.80, stdev=1736.80, samples=5 00:11:35.105 lat (usec) : 250=48.75%, 500=50.57%, 750=0.16%, 1000=0.02% 00:11:35.105 
lat (msec) : 50=0.49% 00:11:35.105 cpu : usr=1.51%, sys=4.83%, ctx=6385, majf=0, minf=2 00:11:35.105 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:35.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:35.105 issued rwts: total=6385,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:35.105 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:35.105 00:11:35.105 Run status group 0 (all jobs): 00:11:35.105 READ: bw=7226KiB/s (7399kB/s), 94.7KiB/s-8566KiB/s (97.0kB/s-8772kB/s), io=27.3MiB (28.6MB), run=2981-3869msec 00:11:35.105 00:11:35.105 Disk stats (read/write): 00:11:35.105 nvme0n1: ios=204/0, merge=0/0, ticks=3315/0, in_queue=3315, util=95.71% 00:11:35.105 nvme0n2: ios=318/0, merge=0/0, ticks=3826/0, in_queue=3826, util=96.53% 00:11:35.105 nvme0n3: ios=73/0, merge=0/0, ticks=3062/0, in_queue=3062, util=96.76% 00:11:35.105 nvme0n4: ios=6381/0, merge=0/0, ticks=2627/0, in_queue=2627, util=96.74% 00:11:35.363 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.363 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:35.622 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.622 16:17:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:35.880 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:35.880 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 155599 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:36.452 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:36.712 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:36.712 nvmf hotplug test: fio failed as expected 00:11:36.712 16:17:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:36.973 rmmod nvme_tcp 00:11:36.973 rmmod nvme_fabrics 00:11:36.973 rmmod nvme_keyring 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:11:36.973 16:17:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 153561 ']' 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 153561 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 153561 ']' 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 153561 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 153561 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 153561' 00:11:36.973 killing process with pid 153561 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 153561 00:11:36.973 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 153561 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.233 16:17:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:39.780 00:11:39.780 real 0m24.344s 00:11:39.780 user 1m26.386s 00:11:39.780 sys 0m6.359s 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:39.780 ************************************ 00:11:39.780 END TEST nvmf_fio_target 00:11:39.780 ************************************ 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:39.780 ************************************ 00:11:39.780 START 
TEST nvmf_bdevio 00:11:39.780 ************************************ 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:39.780 * Looking for test storage... 00:11:39.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:39.780 16:17:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.780 --rc genhtml_branch_coverage=1 00:11:39.780 --rc genhtml_function_coverage=1 00:11:39.780 --rc genhtml_legend=1 00:11:39.780 --rc geninfo_all_blocks=1 00:11:39.780 --rc geninfo_unexecuted_blocks=1 00:11:39.780 00:11:39.780 ' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.780 --rc genhtml_branch_coverage=1 00:11:39.780 --rc genhtml_function_coverage=1 00:11:39.780 --rc genhtml_legend=1 00:11:39.780 --rc geninfo_all_blocks=1 00:11:39.780 --rc geninfo_unexecuted_blocks=1 00:11:39.780 00:11:39.780 ' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.780 --rc genhtml_branch_coverage=1 00:11:39.780 --rc genhtml_function_coverage=1 00:11:39.780 --rc genhtml_legend=1 00:11:39.780 --rc geninfo_all_blocks=1 00:11:39.780 --rc geninfo_unexecuted_blocks=1 00:11:39.780 00:11:39.780 ' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:39.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:39.780 --rc genhtml_branch_coverage=1 00:11:39.780 --rc genhtml_function_coverage=1 00:11:39.780 --rc genhtml_legend=1 00:11:39.780 --rc geninfo_all_blocks=1 00:11:39.780 --rc geninfo_unexecuted_blocks=1 00:11:39.780 00:11:39.780 ' 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:39.780 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.781 16:17:29 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:39.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:11:39.781 16:17:29 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:11:41.694 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.695 16:17:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:41.695 16:17:31 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:41.695 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:41.695 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:41.695 
16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:41.695 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:41.695 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:41.695 16:17:31 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:41.955 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:41.955 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.955 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:11:41.955 00:11:41.956 --- 10.0.0.2 ping statistics --- 00:11:41.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.956 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:41.956 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:41.956 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:11:41.956 00:11:41.956 --- 10.0.0.1 ping statistics --- 00:11:41.956 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.956 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:41.956 16:17:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=158341 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 158341 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 158341 ']' 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.956 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:41.956 [2024-11-19 16:17:32.231757] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:11:41.956 [2024-11-19 16:17:32.231836] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.215 [2024-11-19 16:17:32.306385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:42.215 [2024-11-19 16:17:32.352544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.215 [2024-11-19 16:17:32.352602] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.215 [2024-11-19 16:17:32.352629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.215 [2024-11-19 16:17:32.352641] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.215 [2024-11-19 16:17:32.352651] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:42.215 [2024-11-19 16:17:32.354303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:42.215 [2024-11-19 16:17:32.354333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:42.215 [2024-11-19 16:17:32.354412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:42.215 [2024-11-19 16:17:32.354415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 [2024-11-19 16:17:32.490510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.215 16:17:32 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 Malloc0 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.215 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:42.215 [2024-11-19 16:17:32.550882] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:42.476 { 00:11:42.476 "params": { 00:11:42.476 "name": "Nvme$subsystem", 00:11:42.476 "trtype": "$TEST_TRANSPORT", 00:11:42.476 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:42.476 "adrfam": "ipv4", 00:11:42.476 "trsvcid": "$NVMF_PORT", 00:11:42.476 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:42.476 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:42.476 "hdgst": ${hdgst:-false}, 00:11:42.476 "ddgst": ${ddgst:-false} 00:11:42.476 }, 00:11:42.476 "method": "bdev_nvme_attach_controller" 00:11:42.476 } 00:11:42.476 EOF 00:11:42.476 )") 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:11:42.476 16:17:32 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:42.476 "params": { 00:11:42.476 "name": "Nvme1", 00:11:42.476 "trtype": "tcp", 00:11:42.476 "traddr": "10.0.0.2", 00:11:42.476 "adrfam": "ipv4", 00:11:42.476 "trsvcid": "4420", 00:11:42.476 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.476 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.476 "hdgst": false, 00:11:42.476 "ddgst": false 00:11:42.476 }, 00:11:42.476 "method": "bdev_nvme_attach_controller" 00:11:42.476 }' 00:11:42.476 [2024-11-19 16:17:32.596781] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:11:42.476 [2024-11-19 16:17:32.596851] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158489 ] 00:11:42.476 [2024-11-19 16:17:32.666182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:42.476 [2024-11-19 16:17:32.716188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.476 [2024-11-19 16:17:32.716242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.476 [2024-11-19 16:17:32.716246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:42.737 I/O targets: 00:11:42.737 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:42.737 00:11:42.737 00:11:42.737 CUnit - A unit testing framework for C - Version 2.1-3 00:11:42.737 http://cunit.sourceforge.net/ 00:11:42.737 00:11:42.737 00:11:42.737 Suite: bdevio tests on: Nvme1n1 00:11:42.997 Test: blockdev write read block ...passed 00:11:42.997 Test: blockdev write zeroes read block ...passed 00:11:42.997 Test: blockdev write zeroes read no split ...passed 00:11:42.997 Test: blockdev write zeroes read split 
...passed 00:11:42.997 Test: blockdev write zeroes read split partial ...passed 00:11:42.997 Test: blockdev reset ...[2024-11-19 16:17:33.212860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:11:42.997 [2024-11-19 16:17:33.212967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1067b70 (9): Bad file descriptor 00:11:42.997 [2024-11-19 16:17:33.232930] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:11:42.997 passed 00:11:42.997 Test: blockdev write read 8 blocks ...passed 00:11:42.997 Test: blockdev write read size > 128k ...passed 00:11:42.997 Test: blockdev write read invalid size ...passed 00:11:42.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:42.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:42.997 Test: blockdev write read max offset ...passed 00:11:43.257 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:43.257 Test: blockdev writev readv 8 blocks ...passed 00:11:43.257 Test: blockdev writev readv 30 x 1block ...passed 00:11:43.257 Test: blockdev writev readv block ...passed 00:11:43.257 Test: blockdev writev readv size > 128k ...passed 00:11:43.257 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:43.257 Test: blockdev comparev and writev ...[2024-11-19 16:17:33.447432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.257 [2024-11-19 16:17:33.447469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:43.257 [2024-11-19 16:17:33.447494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.257 [2024-11-19 
16:17:33.447511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:43.257 [2024-11-19 16:17:33.447838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.257 [2024-11-19 16:17:33.447872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:43.257 [2024-11-19 16:17:33.447894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.258 [2024-11-19 16:17:33.447921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.448258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.258 [2024-11-19 16:17:33.448292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.448315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.258 [2024-11-19 16:17:33.448339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.448661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.258 [2024-11-19 16:17:33.448684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.448706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:11:43.258 [2024-11-19 16:17:33.448722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:43.258 passed 00:11:43.258 Test: blockdev nvme passthru rw ...passed 00:11:43.258 Test: blockdev nvme passthru vendor specific ...[2024-11-19 16:17:33.531320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.258 [2024-11-19 16:17:33.531348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.531489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.258 [2024-11-19 16:17:33.531512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.531643] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.258 [2024-11-19 16:17:33.531666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:43.258 [2024-11-19 16:17:33.531799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:43.258 [2024-11-19 16:17:33.531822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:43.258 passed 00:11:43.258 Test: blockdev nvme admin passthru ...passed 00:11:43.258 Test: blockdev copy ...passed 00:11:43.258 00:11:43.258 Run Summary: Type Total Ran Passed Failed Inactive 00:11:43.258 suites 1 1 n/a 0 0 00:11:43.258 tests 23 23 23 0 0 00:11:43.258 asserts 152 152 152 0 n/a 00:11:43.258 00:11:43.258 Elapsed time = 1.066 seconds 
00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:43.517 rmmod nvme_tcp 00:11:43.517 rmmod nvme_fabrics 00:11:43.517 rmmod nvme_keyring 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 158341 ']' 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 158341 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 158341 ']' 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 158341 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 158341 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 158341' 00:11:43.517 killing process with pid 158341 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 158341 00:11:43.517 16:17:33 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 158341 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.776 16:17:34 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:46.322 00:11:46.322 real 0m6.508s 00:11:46.322 user 0m10.046s 00:11:46.322 sys 0m2.201s 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 ************************************ 00:11:46.322 END TEST nvmf_bdevio 00:11:46.322 ************************************ 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:46.322 00:11:46.322 real 3m55.634s 00:11:46.322 user 10m12.728s 00:11:46.322 sys 1m7.251s 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:46.322 ************************************ 00:11:46.322 END TEST nvmf_target_core 00:11:46.322 ************************************ 00:11:46.322 16:17:36 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:46.322 16:17:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.322 16:17:36 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.322 16:17:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:11:46.322 ************************************ 00:11:46.322 START TEST nvmf_target_extra 00:11:46.322 ************************************ 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:46.322 * Looking for test storage... 00:11:46.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.322 --rc genhtml_branch_coverage=1 00:11:46.322 --rc genhtml_function_coverage=1 00:11:46.322 --rc genhtml_legend=1 00:11:46.322 --rc geninfo_all_blocks=1 
00:11:46.322 --rc geninfo_unexecuted_blocks=1 00:11:46.322 00:11:46.322 ' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.322 --rc genhtml_branch_coverage=1 00:11:46.322 --rc genhtml_function_coverage=1 00:11:46.322 --rc genhtml_legend=1 00:11:46.322 --rc geninfo_all_blocks=1 00:11:46.322 --rc geninfo_unexecuted_blocks=1 00:11:46.322 00:11:46.322 ' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.322 --rc genhtml_branch_coverage=1 00:11:46.322 --rc genhtml_function_coverage=1 00:11:46.322 --rc genhtml_legend=1 00:11:46.322 --rc geninfo_all_blocks=1 00:11:46.322 --rc geninfo_unexecuted_blocks=1 00:11:46.322 00:11:46.322 ' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.322 --rc genhtml_branch_coverage=1 00:11:46.322 --rc genhtml_function_coverage=1 00:11:46.322 --rc genhtml_legend=1 00:11:46.322 --rc geninfo_all_blocks=1 00:11:46.322 --rc geninfo_unexecuted_blocks=1 00:11:46.322 00:11:46.322 ' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.322 16:17:36 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.323 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:46.323 ************************************ 00:11:46.323 START TEST nvmf_example 00:11:46.323 ************************************ 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:46.323 * Looking for test storage... 00:11:46.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:46.323 
16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:46.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.323 --rc genhtml_branch_coverage=1 00:11:46.323 --rc genhtml_function_coverage=1 00:11:46.323 --rc genhtml_legend=1 00:11:46.323 --rc geninfo_all_blocks=1 00:11:46.323 --rc geninfo_unexecuted_blocks=1 00:11:46.323 00:11:46.323 ' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:46.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.323 --rc genhtml_branch_coverage=1 00:11:46.323 --rc genhtml_function_coverage=1 00:11:46.323 --rc genhtml_legend=1 00:11:46.323 --rc geninfo_all_blocks=1 00:11:46.323 --rc geninfo_unexecuted_blocks=1 00:11:46.323 00:11:46.323 ' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:46.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.323 --rc genhtml_branch_coverage=1 00:11:46.323 --rc genhtml_function_coverage=1 00:11:46.323 --rc genhtml_legend=1 00:11:46.323 --rc geninfo_all_blocks=1 00:11:46.323 --rc geninfo_unexecuted_blocks=1 00:11:46.323 00:11:46.323 ' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:46.323 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:46.323 --rc 
genhtml_branch_coverage=1 00:11:46.323 --rc genhtml_function_coverage=1 00:11:46.323 --rc genhtml_legend=1 00:11:46.323 --rc geninfo_all_blocks=1 00:11:46.323 --rc geninfo_unexecuted_blocks=1 00:11:46.323 00:11:46.323 ' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:46.323 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:46.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:46.324 16:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:46.324 
16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:46.324 16:17:36 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:48.864 16:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:48.864 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:48.864 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:48.864 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:48.865 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:48.865 16:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:48.865 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:48.865 
16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:48.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:48.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:11:48.865 00:11:48.865 --- 10.0.0.2 ping statistics --- 00:11:48.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.865 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:48.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:48.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.157 ms 00:11:48.865 00:11:48.865 --- 10.0.0.1 ping statistics --- 00:11:48.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:48.865 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:48.865 16:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=160633 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 160633 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 160633 ']' 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:48.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:48.865 16:17:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.865 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:49.126 16:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:49.126 16:17:39 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:59.113 Initializing NVMe Controllers 00:11:59.113 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:59.113 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:59.113 Initialization complete. Launching workers. 00:11:59.113 ======================================================== 00:11:59.113 Latency(us) 00:11:59.113 Device Information : IOPS MiB/s Average min max 00:11:59.113 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14768.28 57.69 4333.21 898.86 19136.30 00:11:59.113 ======================================================== 00:11:59.113 Total : 14768.28 57.69 4333.21 898.86 19136.30 00:11:59.113 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.113 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.113 rmmod nvme_tcp 00:11:59.113 rmmod nvme_fabrics 00:11:59.374 rmmod nvme_keyring 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 160633 ']' 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 160633 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 160633 ']' 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 160633 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 160633 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 160633' 00:11:59.374 killing process with pid 160633 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 160633 00:11:59.374 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 160633 00:11:59.633 nvmf threads initialize successfully 00:11:59.633 bdev subsystem init successfully 00:11:59.633 created a nvmf target service 00:11:59.633 create targets's poll groups done 00:11:59.633 all subsystems of target started 00:11:59.633 nvmf target is running 00:11:59.633 all subsystems of target stopped 00:11:59.633 destroy targets's poll groups done 00:11:59.633 destroyed the nvmf target service 00:11:59.633 bdev subsystem finish 
successfully 00:11:59.633 nvmf threads destroy successfully 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.634 16:17:49 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.547 00:12:01.547 real 0m15.459s 00:12:01.547 user 0m41.563s 00:12:01.547 sys 0m3.782s 00:12:01.547 16:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:01.547 ************************************ 00:12:01.547 END TEST nvmf_example 00:12:01.547 ************************************ 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:01.547 ************************************ 00:12:01.547 START TEST nvmf_filesystem 00:12:01.547 ************************************ 00:12:01.547 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:01.809 * Looking for test storage... 
00:12:01.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.809 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:01.809 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:01.809 16:17:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:01.809 
16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.809 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:01.809 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:01.809 --rc genhtml_branch_coverage=1 00:12:01.809 --rc genhtml_function_coverage=1 00:12:01.809 --rc genhtml_legend=1 00:12:01.810 --rc geninfo_all_blocks=1 00:12:01.810 --rc geninfo_unexecuted_blocks=1 00:12:01.810 00:12:01.810 ' 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.810 --rc genhtml_branch_coverage=1 00:12:01.810 --rc genhtml_function_coverage=1 00:12:01.810 --rc genhtml_legend=1 00:12:01.810 --rc geninfo_all_blocks=1 00:12:01.810 --rc geninfo_unexecuted_blocks=1 00:12:01.810 00:12:01.810 ' 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.810 --rc genhtml_branch_coverage=1 00:12:01.810 --rc genhtml_function_coverage=1 00:12:01.810 --rc genhtml_legend=1 00:12:01.810 --rc geninfo_all_blocks=1 00:12:01.810 --rc geninfo_unexecuted_blocks=1 00:12:01.810 00:12:01.810 ' 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:01.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.810 --rc genhtml_branch_coverage=1 00:12:01.810 --rc genhtml_function_coverage=1 00:12:01.810 --rc genhtml_legend=1 00:12:01.810 --rc geninfo_all_blocks=1 00:12:01.810 --rc geninfo_unexecuted_blocks=1 00:12:01.810 00:12:01.810 ' 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:01.810 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:01.810 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:01.810 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:01.810 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:01.811 
16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:01.811 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:01.811 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:01.811 #define SPDK_CONFIG_H 00:12:01.811 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:01.811 #define SPDK_CONFIG_APPS 1 00:12:01.811 #define SPDK_CONFIG_ARCH native 00:12:01.811 #undef SPDK_CONFIG_ASAN 00:12:01.811 #undef SPDK_CONFIG_AVAHI 00:12:01.811 #undef SPDK_CONFIG_CET 00:12:01.811 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:01.811 #define SPDK_CONFIG_COVERAGE 1 00:12:01.811 #define SPDK_CONFIG_CROSS_PREFIX 00:12:01.811 #undef SPDK_CONFIG_CRYPTO 00:12:01.811 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:01.811 #undef SPDK_CONFIG_CUSTOMOCF 00:12:01.811 #undef SPDK_CONFIG_DAOS 00:12:01.811 #define SPDK_CONFIG_DAOS_DIR 00:12:01.811 #define SPDK_CONFIG_DEBUG 1 00:12:01.811 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:01.811 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:12:01.811 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:12:01.811 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:12:01.811 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:01.811 #undef SPDK_CONFIG_DPDK_UADK 00:12:01.811 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:01.811 #define SPDK_CONFIG_EXAMPLES 1 00:12:01.811 #undef SPDK_CONFIG_FC 00:12:01.811 #define SPDK_CONFIG_FC_PATH 00:12:01.811 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:01.811 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:01.811 #define SPDK_CONFIG_FSDEV 1 00:12:01.811 #undef SPDK_CONFIG_FUSE 00:12:01.811 #undef SPDK_CONFIG_FUZZER 00:12:01.811 #define 
SPDK_CONFIG_FUZZER_LIB 00:12:01.811 #undef SPDK_CONFIG_GOLANG 00:12:01.811 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:01.811 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:01.811 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:01.811 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:01.811 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:01.811 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:01.811 #undef SPDK_CONFIG_HAVE_LZ4 00:12:01.811 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:01.811 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:01.811 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:01.811 #define SPDK_CONFIG_IDXD 1 00:12:01.811 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:01.811 #undef SPDK_CONFIG_IPSEC_MB 00:12:01.811 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:01.811 #define SPDK_CONFIG_ISAL 1 00:12:01.811 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:01.811 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:01.811 #define SPDK_CONFIG_LIBDIR 00:12:01.811 #undef SPDK_CONFIG_LTO 00:12:01.811 #define SPDK_CONFIG_MAX_LCORES 128 00:12:01.811 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:01.811 #define SPDK_CONFIG_NVME_CUSE 1 00:12:01.811 #undef SPDK_CONFIG_OCF 00:12:01.811 #define SPDK_CONFIG_OCF_PATH 00:12:01.811 #define SPDK_CONFIG_OPENSSL_PATH 00:12:01.811 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:01.811 #define SPDK_CONFIG_PGO_DIR 00:12:01.811 #undef SPDK_CONFIG_PGO_USE 00:12:01.811 #define SPDK_CONFIG_PREFIX /usr/local 00:12:01.811 #undef SPDK_CONFIG_RAID5F 00:12:01.811 #undef SPDK_CONFIG_RBD 00:12:01.811 #define SPDK_CONFIG_RDMA 1 00:12:01.811 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:01.811 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:01.811 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:01.812 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:01.812 #define SPDK_CONFIG_SHARED 1 00:12:01.812 #undef SPDK_CONFIG_SMA 00:12:01.812 #define SPDK_CONFIG_TESTS 1 00:12:01.812 #undef SPDK_CONFIG_TSAN 00:12:01.812 #define SPDK_CONFIG_UBLK 1 00:12:01.812 #define SPDK_CONFIG_UBSAN 1 00:12:01.812 #undef 
SPDK_CONFIG_UNIT_TESTS 00:12:01.812 #undef SPDK_CONFIG_URING 00:12:01.812 #define SPDK_CONFIG_URING_PATH 00:12:01.812 #undef SPDK_CONFIG_URING_ZNS 00:12:01.812 #undef SPDK_CONFIG_USDT 00:12:01.812 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:01.812 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:01.812 #define SPDK_CONFIG_VFIO_USER 1 00:12:01.812 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:01.812 #define SPDK_CONFIG_VHOST 1 00:12:01.812 #define SPDK_CONFIG_VIRTIO 1 00:12:01.812 #undef SPDK_CONFIG_VTUNE 00:12:01.812 #define SPDK_CONFIG_VTUNE_DIR 00:12:01.812 #define SPDK_CONFIG_WERROR 1 00:12:01.812 #define SPDK_CONFIG_WPDK_DIR 00:12:01.812 #undef SPDK_CONFIG_XNVME 00:12:01.812 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.812 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:01.812 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:01.812 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:01.813 
16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : v22.11.4
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # :
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT
00:12:01.813 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']'
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV=
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]]
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]]
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh'
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]=
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt=
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']'
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind=
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind=
00:12:01.814 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']'
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem --
common/autotest_common.sh@290 -- # MAKEFLAGS=-j48
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=()
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE=
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@"
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 162337 ]]
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 162337
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]]
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.aI6ybD
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]]
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]]
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.aI6ybD/tests/target /tmp/spdk.aI6ybD
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T
00:12:01.815 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=67108864
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=54353731584
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=61988528128
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7634796544
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30984232960
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=12375273472
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=12397707264
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=22433792
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=30993952768
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=30994264064
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=311296
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=6198837248
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=6198849536
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n'
* Looking for test storage...
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}"
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}'
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=54353731584
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size ))
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size ))
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9849389056
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 ))
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ '
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]'
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 ))
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-:
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1
00:12:02.076 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-:
00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2
00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<'
00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2
00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1
00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 --
local lt=0 gt=0 eq=0 v 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:02.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.077 --rc genhtml_branch_coverage=1 00:12:02.077 --rc genhtml_function_coverage=1 00:12:02.077 --rc genhtml_legend=1 00:12:02.077 --rc geninfo_all_blocks=1 00:12:02.077 --rc geninfo_unexecuted_blocks=1 00:12:02.077 00:12:02.077 ' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:02.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.077 --rc genhtml_branch_coverage=1 00:12:02.077 --rc genhtml_function_coverage=1 00:12:02.077 --rc genhtml_legend=1 00:12:02.077 --rc geninfo_all_blocks=1 00:12:02.077 --rc geninfo_unexecuted_blocks=1 00:12:02.077 00:12:02.077 ' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:02.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.077 --rc genhtml_branch_coverage=1 00:12:02.077 --rc genhtml_function_coverage=1 00:12:02.077 --rc genhtml_legend=1 00:12:02.077 --rc geninfo_all_blocks=1 00:12:02.077 --rc geninfo_unexecuted_blocks=1 00:12:02.077 00:12:02.077 ' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:02.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:02.077 --rc genhtml_branch_coverage=1 00:12:02.077 --rc genhtml_function_coverage=1 00:12:02.077 --rc genhtml_legend=1 00:12:02.077 --rc geninfo_all_blocks=1 00:12:02.077 --rc geninfo_unexecuted_blocks=1 00:12:02.077 00:12:02.077 ' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.077 16:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:02.077 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.077 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.078 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.078 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:02.078 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:02.078 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:12:02.078 16:17:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:04.618 16:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:04.618 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:04.618 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.618 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.618 16:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:04.619 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:04.619 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:04.619 16:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:04.619 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.619 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:12:04.619 00:12:04.619 --- 10.0.0.2 ping statistics --- 00:12:04.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.619 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:04.619 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.619 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:12:04.619 00:12:04.619 --- 10.0.0.1 ping statistics --- 00:12:04.619 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.619 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:04.619 16:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:04.619 ************************************ 00:12:04.619 START TEST nvmf_filesystem_no_in_capsule 00:12:04.619 ************************************ 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=163985 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 163985 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 163985 ']' 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.619 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.619 [2024-11-19 16:17:54.748545] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:12:04.619 [2024-11-19 16:17:54.748642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:04.619 [2024-11-19 16:17:54.821905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:04.619 [2024-11-19 16:17:54.872229] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:04.619 [2024-11-19 16:17:54.872289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:04.619 [2024-11-19 16:17:54.872318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:04.619 [2024-11-19 16:17:54.872330] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:04.619 [2024-11-19 16:17:54.872340] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:04.619 [2024-11-19 16:17:54.873925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.619 [2024-11-19 16:17:54.873992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:04.619 [2024-11-19 16:17:54.874059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:04.619 [2024-11-19 16:17:54.874062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.880 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:04.880 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:04.880 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.880 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:04.880 16:17:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 [2024-11-19 16:17:55.024239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 Malloc1 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.880 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.881 [2024-11-19 16:17:55.205323] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:04.881 16:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.881 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:05.142 { 00:12:05.142 "name": "Malloc1", 00:12:05.142 "aliases": [ 00:12:05.142 "6b4dc3e8-cf38-4d6e-bf46-e88f08948c30" 00:12:05.142 ], 00:12:05.142 "product_name": "Malloc disk", 00:12:05.142 "block_size": 512, 00:12:05.142 "num_blocks": 1048576, 00:12:05.142 "uuid": "6b4dc3e8-cf38-4d6e-bf46-e88f08948c30", 00:12:05.142 "assigned_rate_limits": { 00:12:05.142 "rw_ios_per_sec": 0, 00:12:05.142 "rw_mbytes_per_sec": 0, 00:12:05.142 "r_mbytes_per_sec": 0, 00:12:05.142 "w_mbytes_per_sec": 0 00:12:05.142 }, 00:12:05.142 "claimed": true, 00:12:05.142 "claim_type": "exclusive_write", 00:12:05.142 "zoned": false, 00:12:05.142 "supported_io_types": { 00:12:05.142 "read": true, 00:12:05.142 "write": true, 00:12:05.142 "unmap": true, 00:12:05.142 "flush": true, 00:12:05.142 "reset": true, 00:12:05.142 "nvme_admin": false, 00:12:05.142 "nvme_io": false, 00:12:05.142 "nvme_io_md": false, 00:12:05.142 "write_zeroes": true, 00:12:05.142 "zcopy": true, 00:12:05.142 "get_zone_info": false, 00:12:05.142 "zone_management": false, 00:12:05.142 "zone_append": false, 00:12:05.142 "compare": false, 00:12:05.142 "compare_and_write": 
false, 00:12:05.142 "abort": true, 00:12:05.142 "seek_hole": false, 00:12:05.142 "seek_data": false, 00:12:05.142 "copy": true, 00:12:05.142 "nvme_iov_md": false 00:12:05.142 }, 00:12:05.142 "memory_domains": [ 00:12:05.142 { 00:12:05.142 "dma_device_id": "system", 00:12:05.142 "dma_device_type": 1 00:12:05.142 }, 00:12:05.142 { 00:12:05.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.142 "dma_device_type": 2 00:12:05.142 } 00:12:05.142 ], 00:12:05.142 "driver_specific": {} 00:12:05.142 } 00:12:05.142 ]' 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:05.142 16:17:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:05.712 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:05.712 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:05.712 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:05.712 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:05.712 16:17:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:08.249 16:17:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:08.249 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:08.818 16:17:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:09.767 16:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.767 ************************************ 00:12:09.767 START TEST filesystem_ext4 00:12:09.767 ************************************ 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:09.767 16:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:09.767 16:17:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:09.767 mke2fs 1.47.0 (5-Feb-2023) 00:12:09.767 Discarding device blocks: 0/522240 done 00:12:09.767 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:09.767 Filesystem UUID: 022a0754-9eaf-4654-b0b4-aaeef0ddb39f 00:12:09.767 Superblock backups stored on blocks: 00:12:09.767 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:09.767 00:12:09.767 Allocating group tables: 0/64 done 00:12:09.767 Writing inode tables: 0/64 done 00:12:13.058 Creating journal (8192 blocks): done 00:12:14.939 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:12:14.939 00:12:14.939 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:14.939 16:18:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.513 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.513 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:12:21.513 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.513 16:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:12:21.513 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 163985 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.514 00:12:21.514 real 0m11.423s 00:12:21.514 user 0m0.020s 00:12:21.514 sys 0m0.108s 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:21.514 ************************************ 00:12:21.514 END TEST filesystem_ext4 00:12:21.514 ************************************ 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:21.514 
16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.514 ************************************ 00:12:21.514 START TEST filesystem_btrfs 00:12:21.514 ************************************ 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:21.514 16:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:21.514 btrfs-progs v6.8.1 00:12:21.514 See https://btrfs.readthedocs.io for more information. 00:12:21.514 00:12:21.514 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:21.514 NOTE: several default settings have changed in version 5.15, please make sure 00:12:21.514 this does not affect your deployments: 00:12:21.514 - DUP for metadata (-m dup) 00:12:21.514 - enabled no-holes (-O no-holes) 00:12:21.514 - enabled free-space-tree (-R free-space-tree) 00:12:21.514 00:12:21.514 Label: (null) 00:12:21.514 UUID: 55b97d35-8b97-4fad-af31-4b4f77c39073 00:12:21.514 Node size: 16384 00:12:21.514 Sector size: 4096 (CPU page size: 4096) 00:12:21.514 Filesystem size: 510.00MiB 00:12:21.514 Block group profiles: 00:12:21.514 Data: single 8.00MiB 00:12:21.514 Metadata: DUP 32.00MiB 00:12:21.514 System: DUP 8.00MiB 00:12:21.514 SSD detected: yes 00:12:21.514 Zoned device: no 00:12:21.514 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:21.514 Checksum: crc32c 00:12:21.514 Number of devices: 1 00:12:21.514 Devices: 00:12:21.514 ID SIZE PATH 00:12:21.514 1 510.00MiB /dev/nvme0n1p1 00:12:21.514 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:21.514 16:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:21.514 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:21.774 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 163985 00:12:21.774 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:21.774 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:21.774 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:21.774 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:21.774 00:12:21.774 real 0m0.500s 00:12:21.774 user 0m0.016s 00:12:21.774 sys 0m0.143s 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.775 
16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:21.775 ************************************ 00:12:21.775 END TEST filesystem_btrfs 00:12:21.775 ************************************ 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:21.775 ************************************ 00:12:21.775 START TEST filesystem_xfs 00:12:21.775 ************************************ 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:21.775 16:18:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:21.775 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:21.775 = sectsz=512 attr=2, projid32bit=1 00:12:21.775 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:21.775 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:21.775 data = bsize=4096 blocks=130560, imaxpct=25 00:12:21.775 = sunit=0 swidth=0 blks 00:12:21.775 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:21.775 log =internal log bsize=4096 blocks=16384, version=2 00:12:21.775 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:21.775 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:23.155 Discarding blocks...Done. 
00:12:23.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:23.155 16:18:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 163985 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:25.692 16:18:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:25.692 00:12:25.692 real 0m3.848s 00:12:25.692 user 0m0.009s 00:12:25.692 sys 0m0.103s 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:25.692 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:25.692 ************************************ 00:12:25.692 END TEST filesystem_xfs 00:12:25.692 ************************************ 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:25.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 163985 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 163985 ']' 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 163985 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 163985 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 163985' 00:12:25.693 killing process with pid 163985 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 163985 00:12:25.693 16:18:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 163985 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:26.259 00:12:26.259 real 0m21.671s 00:12:26.259 user 1m24.106s 00:12:26.259 sys 0m2.725s 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.259 ************************************ 00:12:26.259 END TEST nvmf_filesystem_no_in_capsule 00:12:26.259 ************************************ 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.259 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:26.259 ************************************ 00:12:26.259 START TEST nvmf_filesystem_in_capsule 00:12:26.259 ************************************ 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=167375 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 167375 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 167375 ']' 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.259 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.259 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.259 [2024-11-19 16:18:16.477455] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:12:26.259 [2024-11-19 16:18:16.477538] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.259 [2024-11-19 16:18:16.546299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.259 [2024-11-19 16:18:16.589241] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.259 [2024-11-19 16:18:16.589301] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.259 [2024-11-19 16:18:16.589330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.259 [2024-11-19 16:18:16.589342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.259 [2024-11-19 16:18:16.589359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
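The `waitforlisten 167375` sequence above (common/autotest_common.sh@839-868) polls until the freshly started `nvmf_tgt` is alive and answering on `/var/tmp/spdk.sock`, with `max_retries=100`. A sketch of that polling pattern, with the readiness probe stubbed out (the real helper checks the pid and issues an RPC over the UNIX socket; the stub and its success-on-third-attempt behavior are illustrative assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling loop from the trace. The real
# probe is roughly: kill -0 "$pid" plus an RPC round-trip on
# $rpc_addr; here probe_ready is a stub so the loop runs anywhere.

waitforlisten_sketch() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100
    local i=0

    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( i < max_retries )); do
        if probe_ready "$pid"; then
            return 0   # target is up and listening
        fi
        (( i++ ))
        sleep 0.1
    done
    return 1           # gave up after max_retries probes
}

# Stub probe: pretend the target becomes ready on the third attempt
attempts=0
probe_ready() { attempts=$(( attempts + 1 )); (( attempts >= 3 )); }

waitforlisten_sketch 167375 && echo "listening"
```

The trace's `(( i == 0 ))` / `return 0` lines are the fast path: the target came up before the first retry.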
00:12:26.259 [2024-11-19 16:18:16.590804] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.259 [2024-11-19 16:18:16.590914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.259 [2024-11-19 16:18:16.590999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.259 [2024-11-19 16:18:16.591001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.518 [2024-11-19 16:18:16.735028] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.518 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.779 Malloc1 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.779 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.779 [2024-11-19 16:18:16.918004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.779 16:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:26.779 { 00:12:26.779 "name": "Malloc1", 00:12:26.779 "aliases": [ 00:12:26.779 "5b0fadc9-7064-480a-bd48-e705d5df2c80" 00:12:26.779 ], 00:12:26.779 "product_name": "Malloc disk", 00:12:26.779 "block_size": 512, 00:12:26.779 "num_blocks": 1048576, 00:12:26.779 "uuid": "5b0fadc9-7064-480a-bd48-e705d5df2c80", 00:12:26.779 "assigned_rate_limits": { 00:12:26.779 "rw_ios_per_sec": 0, 00:12:26.779 "rw_mbytes_per_sec": 0, 00:12:26.779 "r_mbytes_per_sec": 0, 00:12:26.779 "w_mbytes_per_sec": 0 00:12:26.779 }, 00:12:26.779 "claimed": true, 00:12:26.779 "claim_type": "exclusive_write", 00:12:26.779 "zoned": false, 00:12:26.779 "supported_io_types": { 00:12:26.779 "read": true, 00:12:26.779 "write": true, 00:12:26.779 "unmap": true, 00:12:26.779 "flush": true, 00:12:26.779 "reset": true, 00:12:26.779 "nvme_admin": false, 00:12:26.779 "nvme_io": false, 00:12:26.779 "nvme_io_md": false, 00:12:26.779 "write_zeroes": true, 00:12:26.779 "zcopy": true, 00:12:26.779 "get_zone_info": false, 00:12:26.779 "zone_management": false, 00:12:26.779 "zone_append": false, 00:12:26.779 "compare": false, 00:12:26.779 "compare_and_write": false, 00:12:26.779 "abort": true, 00:12:26.779 "seek_hole": false, 00:12:26.779 "seek_data": false, 00:12:26.779 "copy": true, 00:12:26.779 "nvme_iov_md": false 00:12:26.779 }, 00:12:26.779 "memory_domains": [ 00:12:26.779 { 00:12:26.779 "dma_device_id": "system", 00:12:26.779 "dma_device_type": 1 00:12:26.779 }, 00:12:26.779 { 00:12:26.779 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.779 "dma_device_type": 2 00:12:26.779 } 00:12:26.779 ], 00:12:26.779 
"driver_specific": {} 00:12:26.779 } 00:12:26.779 ]' 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:26.779 16:18:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:26.779 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:26.779 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:26.779 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:26.779 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:26.779 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.717 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.717 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:27.717 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.717 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:12:27.717 16:18:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:29.626 16:18:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:29.626 16:18:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:29.887 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:30.459 16:18:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:31.402 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:31.402 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:31.402 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:31.402 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.402 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:31.663 ************************************ 00:12:31.663 START TEST filesystem_in_capsule_ext4 00:12:31.663 ************************************ 00:12:31.663 16:18:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:31.663 16:18:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:31.663 mke2fs 1.47.0 (5-Feb-2023) 00:12:31.663 Discarding device blocks: 
0/522240 done 00:12:31.663 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:31.663 Filesystem UUID: 2cbf8608-ac9c-4f83-8d03-213569ca0454 00:12:31.664 Superblock backups stored on blocks: 00:12:31.664 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:31.664 00:12:31.664 Allocating group tables: 0/64 done 00:12:31.664 Writing inode tables: 0/64 done 00:12:32.235 Creating journal (8192 blocks): done 00:12:34.138 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:12:34.138 00:12:34.138 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:12:34.138 16:18:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 167375 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:40.717 00:12:40.717 real 0m8.506s 00:12:40.717 user 0m0.023s 00:12:40.717 sys 0m0.067s 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:40.717 ************************************ 00:12:40.717 END TEST filesystem_in_capsule_ext4 00:12:40.717 ************************************ 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.717 ************************************ 00:12:40.717 START 
TEST filesystem_in_capsule_btrfs 00:12:40.717 ************************************ 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:40.717 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:40.717 btrfs-progs v6.8.1 00:12:40.717 See https://btrfs.readthedocs.io for more information. 00:12:40.717 00:12:40.717 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:40.717 NOTE: several default settings have changed in version 5.15, please make sure 00:12:40.717 this does not affect your deployments: 00:12:40.717 - DUP for metadata (-m dup) 00:12:40.717 - enabled no-holes (-O no-holes) 00:12:40.717 - enabled free-space-tree (-R free-space-tree) 00:12:40.717 00:12:40.717 Label: (null) 00:12:40.717 UUID: 1d07a41d-6db1-4926-a745-42d01adca1b4 00:12:40.717 Node size: 16384 00:12:40.717 Sector size: 4096 (CPU page size: 4096) 00:12:40.717 Filesystem size: 510.00MiB 00:12:40.717 Block group profiles: 00:12:40.717 Data: single 8.00MiB 00:12:40.717 Metadata: DUP 32.00MiB 00:12:40.717 System: DUP 8.00MiB 00:12:40.718 SSD detected: yes 00:12:40.718 Zoned device: no 00:12:40.718 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:40.718 Checksum: crc32c 00:12:40.718 Number of devices: 1 00:12:40.718 Devices: 00:12:40.718 ID SIZE PATH 00:12:40.718 1 510.00MiB /dev/nvme0n1p1 00:12:40.718 00:12:40.718 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:40.718 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:40.718 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:40.718 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:40.718 16:18:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 167375 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:40.718 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:40.979 00:12:40.979 real 0m0.738s 00:12:40.979 user 0m0.011s 00:12:40.979 sys 0m0.107s 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:40.979 ************************************ 00:12:40.979 END TEST filesystem_in_capsule_btrfs 00:12:40.979 ************************************ 00:12:40.979 16:18:31 
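The btrfs trace above runs the shared smoke-test flow from target/filesystem.sh: mkfs, mount, touch a file, sync, remove it, sync again, then umount before checking lsblk. A minimal sketch of that flow, with illustrative device and mountpoint arguments:

```shell
# Simplified sketch of the per-filesystem smoke test driven by
# target/filesystem.sh; the device and mountpoint here are illustrative.
fs_smoke_test() {
    local fstype=$1 dev=$2 mnt=$3

    mkfs."$fstype" -f "$dev" || return 1  # -f: overwrite any existing fs
    mount "$dev" "$mnt"
    touch "$mnt/aaa"                      # exercise a small write ...
    sync
    rm "$mnt/aaa"                         # ... and a delete
    sync
    umount "$mnt"                         # detach before target teardown
}
```

In the real script the mkfs step goes through the make_filesystem helper (which handles per-fs flags and retries), and the lsblk/grep calls that follow in the trace confirm the namespace and partition are still visible afterwards.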
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:40.979 ************************************ 00:12:40.979 START TEST filesystem_in_capsule_xfs 00:12:40.979 ************************************ 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:40.979 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:40.979 
16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:40.980 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:40.980 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:40.980 16:18:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:40.980 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:40.980 = sectsz=512 attr=2, projid32bit=1 00:12:40.980 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:40.980 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:40.980 data = bsize=4096 blocks=130560, imaxpct=25 00:12:40.980 = sunit=0 swidth=0 blks 00:12:40.980 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:40.980 log =internal log bsize=4096 blocks=16384, version=2 00:12:40.980 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:40.980 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:41.919 Discarding blocks...Done. 
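The xtrace above steps through make_filesystem in common/autotest_common.sh: it compares the filesystem type against ext4 to pick the mkfs "force" flag, then runs the matching mkfs tool. A sketch under that reading; only the non-ext4 branch (force=-f) is visible in the trace, so the ext4 -F branch is inferred, not shown:

```shell
# Sketch of make_filesystem as traced above (common/autotest_common.sh
# @930-941). btrfs/xfs spell "force" as -f, while ext4's mke2fs uses -F;
# the ext4 branch is inferred from the '[' xfs = ext4 ']' test in the log.
make_filesystem() {
    local fstype=$1 dev_name=$2 force

    if [ "$fstype" = ext4 ]; then
        force=-F
    else
        force=-f
    fi
    mkfs."$fstype" "$force" "$dev_name"
}
```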
00:12:41.919 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:41.919 16:18:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 167375 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:44.464 00:12:44.464 real 0m3.579s 00:12:44.464 user 0m0.024s 00:12:44.464 sys 0m0.055s 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:44.464 ************************************ 00:12:44.464 END TEST filesystem_in_capsule_xfs 00:12:44.464 ************************************ 00:12:44.464 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:44.723 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:44.724 16:18:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.984 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.984 16:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.984 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 167375 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 167375 ']' 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 167375 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:44.985 16:18:35 
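waitforserial_disconnect, traced above, polls lsblk after `nvme disconnect` until the controller's serial (SPDKISFASTANDAWESOME) stops appearing. A hedged sketch of that polling loop; the retry cap is illustrative, not taken from the script:

```shell
# Poll lsblk until a given NVMe serial disappears, mirroring the
# waitforserial_disconnect trace above; the 10-try cap is illustrative.
wait_serial_gone() {
    local serial=$1 i=0

    while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        i=$((i + 1))
        [ "$i" -ge 10 ] && return 1  # give up rather than hang forever
        sleep 1
    done
    return 0
}
```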
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 167375 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 167375' 00:12:44.985 killing process with pid 167375 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 167375 00:12:44.985 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 167375 00:12:45.245 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:45.245 00:12:45.245 real 0m19.159s 00:12:45.245 user 1m14.538s 00:12:45.245 sys 0m2.155s 00:12:45.245 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:45.245 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:45.245 ************************************ 00:12:45.245 END TEST nvmf_filesystem_in_capsule 00:12:45.245 ************************************ 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:45.506 rmmod nvme_tcp 00:12:45.506 rmmod nvme_fabrics 00:12:45.506 rmmod nvme_keyring 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:45.506 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.507 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.507 16:18:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.416 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:47.416 00:12:47.416 real 0m45.858s 00:12:47.416 user 2m39.793s 00:12:47.416 sys 0m6.743s 00:12:47.416 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.416 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:47.416 ************************************ 00:12:47.416 END TEST nvmf_filesystem 00:12:47.416 ************************************ 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.677 ************************************ 00:12:47.677 START TEST nvmf_target_discovery 00:12:47.677 ************************************ 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:47.677 * Looking for test storage... 
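The nvmftestfini teardown traced above has to retry unloading nvme-tcp because the kernel module can stay busy briefly after the initiator disconnects. A sketch of that retry loop; the trace shows `set +e` around it and up to 20 attempts:

```shell
# Retry loop for unloading the kernel NVMe/TCP modules, following the
# nvmf/common.sh trace above: modprobe -r can fail while references
# drain, so keep trying (the trace allows up to 20 attempts).
unload_nvme_modules() {
    local i
    for i in $(seq 1 20); do
        if modprobe -v -r nvme-tcp && modprobe -v -r nvme-fabrics; then
            return 0
        fi
        sleep 1
    done
    return 1
}
```

Once the modules are gone, the trace restores the firewall minus the SPDK rules (`iptables-save | grep -v SPDK_NVMF | iptables-restore`) and flushes the test interface address with `ip -4 addr flush cvl_0_1`.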
00:12:47.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:47.677 
16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.677 --rc genhtml_branch_coverage=1 00:12:47.677 --rc genhtml_function_coverage=1 00:12:47.677 --rc genhtml_legend=1 00:12:47.677 --rc geninfo_all_blocks=1 00:12:47.677 --rc geninfo_unexecuted_blocks=1 00:12:47.677 00:12:47.677 ' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.677 --rc genhtml_branch_coverage=1 00:12:47.677 --rc genhtml_function_coverage=1 00:12:47.677 --rc genhtml_legend=1 00:12:47.677 --rc geninfo_all_blocks=1 00:12:47.677 --rc geninfo_unexecuted_blocks=1 00:12:47.677 00:12:47.677 ' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.677 --rc genhtml_branch_coverage=1 00:12:47.677 --rc genhtml_function_coverage=1 00:12:47.677 --rc genhtml_legend=1 00:12:47.677 --rc geninfo_all_blocks=1 00:12:47.677 --rc geninfo_unexecuted_blocks=1 00:12:47.677 00:12:47.677 ' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.677 --rc genhtml_branch_coverage=1 00:12:47.677 --rc genhtml_function_coverage=1 00:12:47.677 --rc genhtml_legend=1 00:12:47.677 --rc geninfo_all_blocks=1 00:12:47.677 --rc geninfo_unexecuted_blocks=1 00:12:47.677 00:12:47.677 ' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.677 16:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.677 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:47.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:47.678 16:18:37 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.219 16:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:50.219 16:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:50.219 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:50.219 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:50.219 16:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:50.219 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:50.219 16:18:40 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:50.219 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:50.219 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:50.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:50.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:12:50.220 00:12:50.220 --- 10.0.0.2 ping statistics --- 00:12:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.220 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:50.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:50.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:12:50.220 00:12:50.220 --- 10.0.0.1 ping statistics --- 00:12:50.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:50.220 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=171796 00:12:50.220 16:18:40 
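The `nvmf_tcp_init` sequence traced above moves the target-side interface into its own network namespace, assigns the test addresses, opens port 4420, and verifies reachability with pings in both directions. A minimal dry-run sketch of that sequence, with interface names, namespace, and addresses taken from the log; commands are echoed rather than executed, since the real ones need root:

```shell
# Dry-run sketch of the nvmf_tcp_init steps seen in the trace above.
# NS, TARGET_IF, INITIATOR_IF and the 10.0.0.x addresses come from the log.
NS=cvl_0_0_ns_spdk
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1

run() { echo "+ $*"; }   # swap for: sudo "$@" to actually apply

run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"                     # target iface into namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"              # initiator IP (host side)
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"   # target IP (in ns)
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # host -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> host
```

With `run` replaced by `sudo`, this reproduces the topology the harness uses: the target application is then launched under `ip netns exec cvl_0_0_ns_spdk`, which is exactly the `NVMF_TARGET_NS_CMD` prefix visible on the `nvmf_tgt` command line below.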
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 171796 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 171796 ']' 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.220 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.220 [2024-11-19 16:18:40.402833] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:12:50.220 [2024-11-19 16:18:40.402934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.220 [2024-11-19 16:18:40.479861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:50.220 [2024-11-19 16:18:40.531163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:50.220 [2024-11-19 16:18:40.531223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:50.220 [2024-11-19 16:18:40.531252] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:50.220 [2024-11-19 16:18:40.531264] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:50.220 [2024-11-19 16:18:40.531274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:50.220 [2024-11-19 16:18:40.532927] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.220 [2024-11-19 16:18:40.532993] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:50.220 [2024-11-19 16:18:40.533063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:50.220 [2024-11-19 16:18:40.533065] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 [2024-11-19 16:18:40.679516] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 Null1 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 
16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 [2024-11-19 16:18:40.723860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 Null2 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 
16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 Null3 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 Null4 00:12:50.481 
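The `discovery.sh` loop traced above repeats the same three RPCs for i = 1..4: create a null bdev, create subsystem `nqn.2016-06.io.spdk:cnodeN` with a fixed serial, attach the bdev as a namespace, and add a TCP listener on 10.0.0.2:4420, then add a discovery listener and a port-4430 referral. A hedged sketch of that loop; the `rpc.py` path is the usual SPDK location and an assumption here, and commands are echoed so the sketch runs without a live target:

```shell
# Sketch of the discovery.sh RPC loop from the trace (echoed, not invoked).
# RPC path is assumed; in the real run these go through rpc_cmd to nvmf_tgt.
RPC="./scripts/rpc.py"
rpc() { echo "+ $RPC $*"; }

for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512              # name, size_mb, block_size
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

This is why the `nvme discover` output below shows six records: one current discovery subsystem, the four NVMe subsystems, and one referral entry for port 4430.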
16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.481 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:50.741 16:18:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:50.741 00:12:50.741 Discovery Log Number of Records 6, Generation counter 6 00:12:50.741 =====Discovery Log Entry 0====== 00:12:50.741 trtype: tcp 00:12:50.741 adrfam: ipv4 00:12:50.741 subtype: current discovery subsystem 00:12:50.741 treq: not required 00:12:50.741 portid: 0 00:12:50.741 trsvcid: 4420 00:12:50.741 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:50.741 traddr: 10.0.0.2 00:12:50.741 eflags: explicit discovery connections, duplicate discovery information 00:12:50.741 sectype: none 00:12:50.741 =====Discovery Log Entry 1====== 00:12:50.741 trtype: tcp 00:12:50.741 adrfam: ipv4 00:12:50.741 subtype: nvme subsystem 00:12:50.741 treq: not required 00:12:50.741 portid: 0 00:12:50.741 trsvcid: 4420 00:12:50.741 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:50.741 traddr: 10.0.0.2 00:12:50.741 eflags: none 00:12:50.741 sectype: none 00:12:50.741 =====Discovery Log Entry 2====== 00:12:50.741 
trtype: tcp
00:12:50.741 adrfam: ipv4
00:12:50.741 subtype: nvme subsystem
00:12:50.741 treq: not required
00:12:50.741 portid: 0
00:12:50.741 trsvcid: 4420
00:12:50.741 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:50.741 traddr: 10.0.0.2
00:12:50.741 eflags: none
00:12:50.741 sectype: none
00:12:50.741 =====Discovery Log Entry 3======
00:12:50.741 trtype: tcp
00:12:50.741 adrfam: ipv4
00:12:50.741 subtype: nvme subsystem
00:12:50.742 treq: not required
00:12:50.742 portid: 0
00:12:50.742 trsvcid: 4420
00:12:50.742 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:50.742 traddr: 10.0.0.2
00:12:50.742 eflags: none
00:12:50.742 sectype: none
00:12:50.742 =====Discovery Log Entry 4======
00:12:50.742 trtype: tcp
00:12:50.742 adrfam: ipv4
00:12:50.742 subtype: nvme subsystem
00:12:50.742 treq: not required
00:12:50.742 portid: 0
00:12:50.742 trsvcid: 4420
00:12:50.742 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:50.742 traddr: 10.0.0.2
00:12:50.742 eflags: none
00:12:50.742 sectype: none
00:12:50.742 =====Discovery Log Entry 5======
00:12:50.742 trtype: tcp
00:12:50.742 adrfam: ipv4
00:12:50.742 subtype: discovery subsystem referral
00:12:50.742 treq: not required
00:12:50.742 portid: 0
00:12:50.742 trsvcid: 4430
00:12:50.742 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:50.742 traddr: 10.0.0.2
00:12:50.742 eflags: none
00:12:50.742 sectype: none
00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:50.742 Perform nvmf subsystem discovery via RPC
00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:50.742 [
00:12:50.742   {
00:12:50.742     "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:50.742     "subtype": "Discovery",
00:12:50.742     "listen_addresses": [
00:12:50.742       {
00:12:50.742         "trtype": "TCP",
00:12:50.742         "adrfam": "IPv4",
00:12:50.742         "traddr": "10.0.0.2",
00:12:50.742         "trsvcid": "4420"
00:12:50.742       }
00:12:50.742     ],
00:12:50.742     "allow_any_host": true,
00:12:50.742     "hosts": []
00:12:50.742   },
00:12:50.742   {
00:12:50.742     "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:50.742     "subtype": "NVMe",
00:12:50.742     "listen_addresses": [
00:12:50.742       {
00:12:50.742         "trtype": "TCP",
00:12:50.742         "adrfam": "IPv4",
00:12:50.742         "traddr": "10.0.0.2",
00:12:50.742         "trsvcid": "4420"
00:12:50.742       }
00:12:50.742     ],
00:12:50.742     "allow_any_host": true,
00:12:50.742     "hosts": [],
00:12:50.742     "serial_number": "SPDK00000000000001",
00:12:50.742     "model_number": "SPDK bdev Controller",
00:12:50.742     "max_namespaces": 32,
00:12:50.742     "min_cntlid": 1,
00:12:50.742     "max_cntlid": 65519,
00:12:50.742     "namespaces": [
00:12:50.742       {
00:12:50.742         "nsid": 1,
00:12:50.742         "bdev_name": "Null1",
00:12:50.742         "name": "Null1",
00:12:50.742         "nguid": "6C7F043AEF4F41C691E0E65569F8FFC8",
00:12:50.742         "uuid": "6c7f043a-ef4f-41c6-91e0-e65569f8ffc8"
00:12:50.742       }
00:12:50.742     ]
00:12:50.742   },
00:12:50.742   {
00:12:50.742     "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:50.742     "subtype": "NVMe",
00:12:50.742     "listen_addresses": [
00:12:50.742       {
00:12:50.742         "trtype": "TCP",
00:12:50.742         "adrfam": "IPv4",
00:12:50.742         "traddr": "10.0.0.2",
00:12:50.742         "trsvcid": "4420"
00:12:50.742       }
00:12:50.742     ],
00:12:50.742     "allow_any_host": true,
00:12:50.742     "hosts": [],
00:12:50.742     "serial_number": "SPDK00000000000002",
00:12:50.742     "model_number": "SPDK bdev Controller",
00:12:50.742     "max_namespaces": 32,
00:12:50.742     "min_cntlid": 1,
00:12:50.742     "max_cntlid": 65519,
00:12:50.742     "namespaces": [
00:12:50.742       {
00:12:50.742         "nsid": 1,
00:12:50.742         "bdev_name": "Null2",
00:12:50.742         "name": "Null2",
00:12:50.742         "nguid": "4B8091FB1AD3421FB70FAAA02BEBAAB5",
00:12:50.742         "uuid": "4b8091fb-1ad3-421f-b70f-aaa02bebaab5"
00:12:50.742       }
00:12:50.742     ]
00:12:50.742   },
00:12:50.742   {
00:12:50.742     "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:50.742     "subtype": "NVMe",
00:12:50.742     "listen_addresses": [
00:12:50.742       {
00:12:50.742         "trtype": "TCP",
00:12:50.742         "adrfam": "IPv4",
00:12:50.742         "traddr": "10.0.0.2",
00:12:50.742         "trsvcid": "4420"
00:12:50.742       }
00:12:50.742     ],
00:12:50.742     "allow_any_host": true,
00:12:50.742     "hosts": [],
00:12:50.742     "serial_number": "SPDK00000000000003",
00:12:50.742     "model_number": "SPDK bdev Controller",
00:12:50.742     "max_namespaces": 32,
00:12:50.742     "min_cntlid": 1,
00:12:50.742     "max_cntlid": 65519,
00:12:50.742     "namespaces": [
00:12:50.742       {
00:12:50.742         "nsid": 1,
00:12:50.742         "bdev_name": "Null3",
00:12:50.742         "name": "Null3",
00:12:50.742         "nguid": "DF15249FBE214123A8E07FD63E73ED96",
00:12:50.742         "uuid": "df15249f-be21-4123-a8e0-7fd63e73ed96"
00:12:50.742       }
00:12:50.742     ]
00:12:50.742   },
00:12:50.742   {
00:12:50.742     "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:50.742     "subtype": "NVMe",
00:12:50.742     "listen_addresses": [
00:12:50.742       {
00:12:50.742         "trtype": "TCP",
00:12:50.742         "adrfam": "IPv4",
00:12:50.742         "traddr": "10.0.0.2",
00:12:50.742         "trsvcid": "4420"
00:12:50.742       }
00:12:50.742     ],
00:12:50.742     "allow_any_host": true,
00:12:50.742     "hosts": [],
00:12:50.742     "serial_number": "SPDK00000000000004",
00:12:50.742     "model_number": "SPDK bdev Controller",
00:12:50.742     "max_namespaces": 32,
00:12:50.742     "min_cntlid": 1,
00:12:50.742     "max_cntlid": 65519,
00:12:50.742     "namespaces": [
00:12:50.742       {
00:12:50.742         "nsid": 1,
00:12:50.742         "bdev_name": "Null4",
00:12:50.742         "name": "Null4",
00:12:50.742         "nguid": "E582DADE9C60419ABB566DBF06F038D3",
00:12:50.742         "uuid": "e582dade-9c60-419a-bb56-6dbf06f038d3"
00:12:50.742       }
00:12:50.742     ]
00:12:50.742   }
00:12:50.742 ]
00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:50.742
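The `nvmf_get_subsystems` reply dumped above is plain JSON, so it can be post-processed outside of `jq`. A minimal Python sketch; the JSON literal here is a trimmed stand-in mirroring the entries in this log (not captured output), and the filtering logic is illustrative rather than anything the test script itself does:

```python
import json

# Trimmed stand-in for the nvmf_get_subsystems reply shown in the log above.
reply = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
""")

# Keep only NVMe subsystems (skip the discovery subsystem) and
# map each NQN to the bdev names backing its namespaces.
nvme_bdevs = {
    sub["nqn"]: [ns["bdev_name"] for ns in sub.get("namespaces", [])]
    for sub in reply
    if sub["subtype"] == "NVMe"
}
print(nvme_bdevs)
```

Against the full listing in the log, the same dict comprehension would map cnode1 through cnode4 to Null1 through Null4.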
16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:50.742 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
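The teardown traced above follows a fixed pattern: for each i in 1..4, delete subsystem cnode&lt;i&gt; and then its backing null bdev Null&lt;i&gt;, and finally remove the discovery referral on port 4430. A sketch of that command sequence in Python; the `rpc.py` invocations are illustrative strings mirroring the RPC names in the trace, not a runnable SPDK client:

```python
def teardown_cmds(count=4):
    """Build the RPC command sequence traced in the log above."""
    cmds = []
    for i in range(1, count + 1):
        # Subsystem first, then the null bdev that backed its namespace.
        cmds.append(f"rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode{i}")
        cmds.append(f"rpc.py bdev_null_delete Null{i}")
    # Finally drop the discovery referral added during setup (port 4430).
    cmds.append("rpc.py nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430")
    return cmds

for cmd in teardown_cmds():
    print(cmd)
```

Ordering matters here: the subsystem must be deleted before its null bdev, otherwise the bdev is still claimed by a namespace.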
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:51.004 rmmod nvme_tcp 00:12:51.004 rmmod nvme_fabrics 00:12:51.004 rmmod nvme_keyring 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 171796 ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 171796 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 171796 ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 171796 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:51.004 
16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 171796 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 171796' 00:12:51.004 killing process with pid 171796 00:12:51.004 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 171796 00:12:51.005 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 171796 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns
00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:12:51.264 16:18:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:53.177 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:12:53.437
00:12:53.437 real	0m5.735s
00:12:53.437 user	0m4.769s
00:12:53.437 sys	0m2.050s
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:53.437 ************************************
00:12:53.437 END TEST nvmf_target_discovery
00:12:53.437 ************************************
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:12:53.437 ************************************
00:12:53.437 START TEST nvmf_referrals
00:12:53.437 ************************************
00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp
00:12:53.437 * Looking for test storage...
00:12:53.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:53.437 16:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:53.437 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.438 
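The `cmp_versions` trace above (from `scripts/common.sh`) splits each version string on `.`, `-`, and `:` and compares the numeric fields left to right; `lt 1.15 2` is checking whether the detected lcov is older than 2.0. A small Python sketch of that comparison logic, written from what the trace shows (the exact shell padding details are an assumption):

```python
import re

def version_lt(v1, v2):
    """Return True if v1 < v2, comparing dotted numeric fields left to right,
    mirroring the cmp_versions logic traced in the log above."""
    a = [int(x) for x in re.split(r"[.:-]", v1) if x.isdigit()]
    b = [int(x) for x in re.split(r"[.:-]", v2) if x.isdigit()]
    # Pad the shorter version with zeros, as the shell loop does implicitly.
    for x, y in zip(a + [0] * len(b), b + [0] * len(a)):
        if x != y:
            return x < y
    return False

# The trace above evaluates 'lt 1.15 2' for the detected lcov version.
print(version_lt("1.15", "2"))
```

With component-wise comparison, "1.15" is correctly older than "2" even though a naive string comparison would say otherwise.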
--rc genhtml_branch_coverage=1 00:12:53.438 --rc genhtml_function_coverage=1 00:12:53.438 --rc genhtml_legend=1 00:12:53.438 --rc geninfo_all_blocks=1 00:12:53.438 --rc geninfo_unexecuted_blocks=1 00:12:53.438 00:12:53.438 ' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.438 --rc genhtml_branch_coverage=1 00:12:53.438 --rc genhtml_function_coverage=1 00:12:53.438 --rc genhtml_legend=1 00:12:53.438 --rc geninfo_all_blocks=1 00:12:53.438 --rc geninfo_unexecuted_blocks=1 00:12:53.438 00:12:53.438 ' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.438 --rc genhtml_branch_coverage=1 00:12:53.438 --rc genhtml_function_coverage=1 00:12:53.438 --rc genhtml_legend=1 00:12:53.438 --rc geninfo_all_blocks=1 00:12:53.438 --rc geninfo_unexecuted_blocks=1 00:12:53.438 00:12:53.438 ' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:53.438 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:53.438 --rc genhtml_branch_coverage=1 00:12:53.438 --rc genhtml_function_coverage=1 00:12:53.438 --rc genhtml_legend=1 00:12:53.438 --rc geninfo_all_blocks=1 00:12:53.438 --rc geninfo_unexecuted_blocks=1 00:12:53.438 00:12:53.438 ' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.438 
16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.438 16:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:53.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:53.438 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:53.439 16:18:43 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:53.439 16:18:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:55.980 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:55.981 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:55.981 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:55.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:55.981 16:18:45 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:55.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:55.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:55.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:12:55.981 00:12:55.981 --- 10.0.0.2 ping statistics --- 00:12:55.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.981 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:55.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:55.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:55.981 00:12:55.981 --- 10.0.0.1 ping statistics --- 00:12:55.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:55.981 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=173895 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 173895 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 173895 ']' 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:55.981 16:18:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.981 [2024-11-19 16:18:46.042475] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:12:55.982 [2024-11-19 16:18:46.042563] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.982 [2024-11-19 16:18:46.112436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:55.982 [2024-11-19 16:18:46.157541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:55.982 [2024-11-19 16:18:46.157597] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
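Before `nvmf_tgt` comes up, the trace above shows `nvmf_tcp_init` moving one NIC port into a private network namespace so initiator and target traffic cross real interfaces. A minimal sketch of that plumbing follows; it only *prints* the `ip`/`iptables` commands (they need root and the actual two-port NIC to run), and the interface and namespace names are taken from the trace, not from any fixed API.

```shell
# Sketch of the namespace setup nvmf_tcp_init performs in the trace above.
# The function prints the commands instead of executing them, so the sketch
# is safe to run anywhere; feed the output to a root shell on real hardware.
NS=cvl_0_0_ns_spdk      # target-side namespace, as in the trace
TGT_IF=cvl_0_0          # port moved into the namespace (gets 10.0.0.2)
INI_IF=cvl_0_1          # port left in the default namespace (gets 10.0.0.1)

netns_setup_cmds() {
    cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
EOF
}

netns_setup_cmds
```

After this, the trace sanity-checks the path with `ping -c 1` in each direction and prefixes every target-side command (including launching `nvmf_tgt`) with `ip netns exec $NS`.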
00:12:55.982 [2024-11-19 16:18:46.157632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:55.982 [2024-11-19 16:18:46.157644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:55.982 [2024-11-19 16:18:46.157653] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:55.982 [2024-11-19 16:18:46.159186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:55.982 [2024-11-19 16:18:46.159254] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:55.982 [2024-11-19 16:18:46.159324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.982 [2024-11-19 16:18:46.159321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.982 [2024-11-19 16:18:46.294173] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:55.982 [2024-11-19 16:18:46.306461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.982 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:56.240 16:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.240 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:56.498 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:56.758 16:18:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:57.017 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:57.018 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.276 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:57.537 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:57.537 16:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:57.537 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:57.537 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:57.537 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.537 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:57.798 16:18:47 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:58.060 rmmod nvme_tcp 00:12:58.060 rmmod nvme_fabrics 00:12:58.060 rmmod nvme_keyring 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 173895 ']' 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 173895 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 173895 ']' 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 173895 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 173895 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 173895' 00:12:58.060 killing process with pid 173895 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 173895 00:12:58.060 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 173895 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:58.322 16:18:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:00.235 00:13:00.235 real 0m6.950s 00:13:00.235 user 0m10.646s 00:13:00.235 sys 0m2.335s 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:00.235 ************************************ 
00:13:00.235 END TEST nvmf_referrals 00:13:00.235 ************************************ 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:00.235 16:18:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:00.235 ************************************ 00:13:00.236 START TEST nvmf_connect_disconnect 00:13:00.236 ************************************ 00:13:00.236 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:00.495 * Looking for test storage... 
00:13:00.495 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:00.495 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:00.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.496 --rc genhtml_branch_coverage=1 00:13:00.496 --rc genhtml_function_coverage=1 00:13:00.496 --rc genhtml_legend=1 00:13:00.496 --rc geninfo_all_blocks=1 00:13:00.496 --rc geninfo_unexecuted_blocks=1 00:13:00.496 00:13:00.496 ' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:00.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.496 --rc genhtml_branch_coverage=1 00:13:00.496 --rc genhtml_function_coverage=1 00:13:00.496 --rc genhtml_legend=1 00:13:00.496 --rc geninfo_all_blocks=1 00:13:00.496 --rc geninfo_unexecuted_blocks=1 00:13:00.496 00:13:00.496 ' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:00.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.496 --rc genhtml_branch_coverage=1 00:13:00.496 --rc genhtml_function_coverage=1 00:13:00.496 --rc genhtml_legend=1 00:13:00.496 --rc geninfo_all_blocks=1 00:13:00.496 --rc geninfo_unexecuted_blocks=1 00:13:00.496 00:13:00.496 ' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:00.496 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:00.496 --rc genhtml_branch_coverage=1 00:13:00.496 --rc genhtml_function_coverage=1 00:13:00.496 --rc genhtml_legend=1 00:13:00.496 --rc geninfo_all_blocks=1 00:13:00.496 --rc geninfo_unexecuted_blocks=1 00:13:00.496 00:13:00.496 ' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:00.496 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:13:00.496 16:18:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.035 16:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:03.035 16:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.035 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:03.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:03.036 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:03.036 16:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:03.036 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:03.036 16:18:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:03.036 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.036 16:18:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:03.036 16:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:03.036 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:03.036 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:13:03.036 00:13:03.036 --- 10.0.0.2 ping statistics --- 00:13:03.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.036 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:03.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:03.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:13:03.036 00:13:03.036 --- 10.0.0.1 ping statistics --- 00:13:03.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.036 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=176201 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 176201 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 176201 ']' 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.036 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.036 [2024-11-19 16:18:53.266252] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:13:03.036 [2024-11-19 16:18:53.266341] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.036 [2024-11-19 16:18:53.339430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.296 [2024-11-19 16:18:53.389581] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:03.296 [2024-11-19 16:18:53.389643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.296 [2024-11-19 16:18:53.389673] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.296 [2024-11-19 16:18:53.389685] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.296 [2024-11-19 16:18:53.389695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:03.296 [2024-11-19 16:18:53.393095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.296 [2024-11-19 16:18:53.393149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.296 [2024-11-19 16:18:53.393211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.296 [2024-11-19 16:18:53.393215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:03.296 16:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 [2024-11-19 16:18:53.546617] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.296 16:18:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:13:03.296 [2024-11-19 16:18:53.618849] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:13:03.296 16:18:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:13:05.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:14.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.738 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:26.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:28.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.811 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:35.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:42.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.257 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:49.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.245 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:58.661 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.208 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.194 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:19.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.190 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.731 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.643 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.181 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.635 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.715 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.164 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.184 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:10.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.637 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.172 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.622 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.071 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.156 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.064 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.603 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.046 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.585 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.023 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:03.928 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.963 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.500 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:17.937 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.462 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:22.988 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:24.884 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.417 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:29.955 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:31.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:34.401 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.936 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:38.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:41.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:45.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:48.359 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.266 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:52.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:55.342 rmmod nvme_tcp 00:16:55.342 rmmod nvme_fabrics 00:16:55.342 rmmod nvme_keyring 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 176201 ']' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 176201 ']' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 176201' 00:16:55.342 killing process with pid 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 176201 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:55.342 16:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:55.342 16:22:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.886 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:57.887 00:16:57.887 real 3m57.082s 00:16:57.887 user 15m2.121s 00:16:57.887 sys 0m35.923s 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:57.887 ************************************ 00:16:57.887 END TEST nvmf_connect_disconnect 00:16:57.887 ************************************ 00:16:57.887 16:22:47 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:57.887 ************************************ 00:16:57.887 START TEST nvmf_multitarget 00:16:57.887 ************************************ 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:57.887 * Looking for test storage... 00:16:57.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 
-- # read -ra ver1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.887 --rc genhtml_branch_coverage=1 00:16:57.887 --rc genhtml_function_coverage=1 00:16:57.887 --rc genhtml_legend=1 00:16:57.887 --rc geninfo_all_blocks=1 00:16:57.887 --rc 
geninfo_unexecuted_blocks=1 00:16:57.887 00:16:57.887 ' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.887 --rc genhtml_branch_coverage=1 00:16:57.887 --rc genhtml_function_coverage=1 00:16:57.887 --rc genhtml_legend=1 00:16:57.887 --rc geninfo_all_blocks=1 00:16:57.887 --rc geninfo_unexecuted_blocks=1 00:16:57.887 00:16:57.887 ' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.887 --rc genhtml_branch_coverage=1 00:16:57.887 --rc genhtml_function_coverage=1 00:16:57.887 --rc genhtml_legend=1 00:16:57.887 --rc geninfo_all_blocks=1 00:16:57.887 --rc geninfo_unexecuted_blocks=1 00:16:57.887 00:16:57.887 ' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:57.887 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:57.887 --rc genhtml_branch_coverage=1 00:16:57.887 --rc genhtml_function_coverage=1 00:16:57.887 --rc genhtml_legend=1 00:16:57.887 --rc geninfo_all_blocks=1 00:16:57.887 --rc geninfo_unexecuted_blocks=1 00:16:57.887 00:16:57.887 ' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:57.887 16:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.887 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:57.888 16:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:57.888 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:57.888 16:22:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@322 -- # local -ga mlx 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # 
[[ e810 == mlx5 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:59.794 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:59.794 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:59.794 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- 
# [[ tcp == tcp ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:59.794 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:59.794 16:22:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:59.794 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:59.795 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:00.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:00.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:17:00.053 00:17:00.053 --- 10.0.0.2 ping statistics --- 00:17:00.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.053 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:00.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:00.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:17:00.053 00:17:00.053 --- 10.0.0.1 ping statistics --- 00:17:00.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:00.053 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:17:00.053 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=207431 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 207431 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 207431 ']' 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:00.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.054 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.054 [2024-11-19 16:22:50.275637] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:17:00.054 [2024-11-19 16:22:50.275730] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.054 [2024-11-19 16:22:50.349600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:00.312 [2024-11-19 16:22:50.395293] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.312 [2024-11-19 16:22:50.395358] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.312 [2024-11-19 16:22:50.395371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:00.312 [2024-11-19 16:22:50.395382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:00.312 [2024-11-19 16:22:50.395392] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:00.312 [2024-11-19 16:22:50.397016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:00.312 [2024-11-19 16:22:50.397094] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:00.312 [2024-11-19 16:22:50.397154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:00.313 [2024-11-19 16:22:50.397157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.313 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:17:00.571 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:17:00.571 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:17:00.571 "nvmf_tgt_1" 00:17:00.571 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:17:00.830 "nvmf_tgt_2" 00:17:00.830 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:00.830 16:22:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:17:00.830 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:17:00.830 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:17:00.830 true 00:17:01.088 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:17:01.088 true 00:17:01.088 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:17:01.088 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:01.347 16:22:51 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:01.347 rmmod nvme_tcp 00:17:01.347 rmmod nvme_fabrics 00:17:01.347 rmmod nvme_keyring 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 207431 ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 207431 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 207431 ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 207431 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 207431 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # 
'[' reactor_0 = sudo ']' 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 207431' 00:17:01.347 killing process with pid 207431 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 207431 00:17:01.347 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 207431 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.607 16:22:51 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:03.514 00:17:03.514 
real 0m6.033s 00:17:03.514 user 0m6.929s 00:17:03.514 sys 0m2.097s 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:17:03.514 ************************************ 00:17:03.514 END TEST nvmf_multitarget 00:17:03.514 ************************************ 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:03.514 ************************************ 00:17:03.514 START TEST nvmf_rpc 00:17:03.514 ************************************ 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:17:03.514 * Looking for test storage... 
00:17:03.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:17:03.514 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:03.773 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.773 --rc genhtml_branch_coverage=1 00:17:03.773 --rc genhtml_function_coverage=1 00:17:03.773 --rc genhtml_legend=1 00:17:03.773 --rc geninfo_all_blocks=1 00:17:03.773 --rc geninfo_unexecuted_blocks=1 
00:17:03.773 00:17:03.773 ' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.773 --rc genhtml_branch_coverage=1 00:17:03.773 --rc genhtml_function_coverage=1 00:17:03.773 --rc genhtml_legend=1 00:17:03.773 --rc geninfo_all_blocks=1 00:17:03.773 --rc geninfo_unexecuted_blocks=1 00:17:03.773 00:17:03.773 ' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.773 --rc genhtml_branch_coverage=1 00:17:03.773 --rc genhtml_function_coverage=1 00:17:03.773 --rc genhtml_legend=1 00:17:03.773 --rc geninfo_all_blocks=1 00:17:03.773 --rc geninfo_unexecuted_blocks=1 00:17:03.773 00:17:03.773 ' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:03.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:03.773 --rc genhtml_branch_coverage=1 00:17:03.773 --rc genhtml_function_coverage=1 00:17:03.773 --rc genhtml_legend=1 00:17:03.773 --rc geninfo_all_blocks=1 00:17:03.773 --rc geninfo_unexecuted_blocks=1 00:17:03.773 00:17:03.773 ' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.773 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.773 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:03.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:03.774 16:22:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:17:03.774 16:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.310 
16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:17:06.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:06.310 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.311 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.311 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.311 16:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:06.311 
16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:06.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:17:06.311 00:17:06.311 --- 10.0.0.2 ping statistics --- 00:17:06.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.311 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:17:06.311 00:17:06.311 --- 10.0.0.1 ping statistics --- 00:17:06.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.311 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=209541 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 209541 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 209541 ']' 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.311 [2024-11-19 16:22:56.338984] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:17:06.311 [2024-11-19 16:22:56.339088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:06.311 [2024-11-19 16:22:56.419401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.311 [2024-11-19 16:22:56.468990] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:06.311 [2024-11-19 16:22:56.469045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:06.311 [2024-11-19 16:22:56.469080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.311 [2024-11-19 16:22:56.469092] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.311 [2024-11-19 16:22:56.469101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.311 [2024-11-19 16:22:56.470665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.311 [2024-11-19 16:22:56.470732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.311 [2024-11-19 16:22:56.470754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:06.311 [2024-11-19 16:22:56.470758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.311 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.311 16:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:17:06.311 "tick_rate": 2700000000, 00:17:06.311 "poll_groups": [ 00:17:06.311 { 00:17:06.311 "name": "nvmf_tgt_poll_group_000", 00:17:06.311 "admin_qpairs": 0, 00:17:06.311 "io_qpairs": 0, 00:17:06.311 "current_admin_qpairs": 0, 00:17:06.311 "current_io_qpairs": 0, 00:17:06.311 "pending_bdev_io": 0, 00:17:06.311 "completed_nvme_io": 0, 00:17:06.311 "transports": [] 00:17:06.311 }, 00:17:06.311 { 00:17:06.311 "name": "nvmf_tgt_poll_group_001", 00:17:06.311 "admin_qpairs": 0, 00:17:06.311 "io_qpairs": 0, 00:17:06.311 "current_admin_qpairs": 0, 00:17:06.311 "current_io_qpairs": 0, 00:17:06.312 "pending_bdev_io": 0, 00:17:06.312 "completed_nvme_io": 0, 00:17:06.312 "transports": [] 00:17:06.312 }, 00:17:06.312 { 00:17:06.312 "name": "nvmf_tgt_poll_group_002", 00:17:06.312 "admin_qpairs": 0, 00:17:06.312 "io_qpairs": 0, 00:17:06.312 "current_admin_qpairs": 0, 00:17:06.312 "current_io_qpairs": 0, 00:17:06.312 "pending_bdev_io": 0, 00:17:06.312 "completed_nvme_io": 0, 00:17:06.312 "transports": [] 00:17:06.312 }, 00:17:06.312 { 00:17:06.312 "name": "nvmf_tgt_poll_group_003", 00:17:06.312 "admin_qpairs": 0, 00:17:06.312 "io_qpairs": 0, 00:17:06.312 "current_admin_qpairs": 0, 00:17:06.312 "current_io_qpairs": 0, 00:17:06.312 "pending_bdev_io": 0, 00:17:06.312 "completed_nvme_io": 0, 00:17:06.312 "transports": [] 00:17:06.312 } 00:17:06.312 ] 00:17:06.312 }' 00:17:06.312 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:17:06.312 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:17:06.312 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:17:06.312 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:17:06.571 16:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 [2024-11-19 16:22:56.719300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:17:06.571 "tick_rate": 2700000000, 00:17:06.571 "poll_groups": [ 00:17:06.571 { 00:17:06.571 "name": "nvmf_tgt_poll_group_000", 00:17:06.571 "admin_qpairs": 0, 00:17:06.571 "io_qpairs": 0, 00:17:06.571 "current_admin_qpairs": 0, 00:17:06.571 "current_io_qpairs": 0, 00:17:06.571 "pending_bdev_io": 0, 00:17:06.571 "completed_nvme_io": 0, 00:17:06.571 "transports": [ 00:17:06.571 { 00:17:06.571 "trtype": "TCP" 00:17:06.571 } 00:17:06.571 ] 00:17:06.571 }, 00:17:06.571 { 00:17:06.571 "name": "nvmf_tgt_poll_group_001", 00:17:06.571 "admin_qpairs": 0, 00:17:06.571 "io_qpairs": 0, 00:17:06.571 "current_admin_qpairs": 0, 00:17:06.571 "current_io_qpairs": 0, 00:17:06.571 "pending_bdev_io": 0, 00:17:06.571 
"completed_nvme_io": 0, 00:17:06.571 "transports": [ 00:17:06.571 { 00:17:06.571 "trtype": "TCP" 00:17:06.571 } 00:17:06.571 ] 00:17:06.571 }, 00:17:06.571 { 00:17:06.571 "name": "nvmf_tgt_poll_group_002", 00:17:06.571 "admin_qpairs": 0, 00:17:06.571 "io_qpairs": 0, 00:17:06.571 "current_admin_qpairs": 0, 00:17:06.571 "current_io_qpairs": 0, 00:17:06.571 "pending_bdev_io": 0, 00:17:06.571 "completed_nvme_io": 0, 00:17:06.571 "transports": [ 00:17:06.571 { 00:17:06.571 "trtype": "TCP" 00:17:06.571 } 00:17:06.571 ] 00:17:06.571 }, 00:17:06.571 { 00:17:06.571 "name": "nvmf_tgt_poll_group_003", 00:17:06.571 "admin_qpairs": 0, 00:17:06.571 "io_qpairs": 0, 00:17:06.571 "current_admin_qpairs": 0, 00:17:06.571 "current_io_qpairs": 0, 00:17:06.571 "pending_bdev_io": 0, 00:17:06.571 "completed_nvme_io": 0, 00:17:06.571 "transports": [ 00:17:06.571 { 00:17:06.571 "trtype": "TCP" 00:17:06.571 } 00:17:06.571 ] 00:17:06.571 } 00:17:06.571 ] 00:17:06.571 }' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:06.571 
16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 Malloc1 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:17:06.571 16:22:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.571 [2024-11-19 16:22:56.895689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:06.571 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:17:06.831 [2024-11-19 16:22:56.918308] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:06.831 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:06.831 could not add new controller: failed to write to nvme-fabrics device 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.831 16:22:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:07.399 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:17:07.399 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:07.399 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:07.399 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:07.399 16:22:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:17:09.305 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:09.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:17:09.566 16:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:09.566 [2024-11-19 16:22:59.698628] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:17:09.566 Failed to write to /dev/nvme-fabrics: Input/output error 00:17:09.566 could not add new controller: failed to write to nvme-fabrics device 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:17:09.566 
16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.566 16:22:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:10.133 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:17:10.133 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:10.133 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:10.133 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:10.133 16:23:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:12.040 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:12.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:17:12.298 16:23:02 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.298 [2024-11-19 16:23:02.439217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.298 16:23:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:12.866 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:12.866 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:12.866 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:12.866 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:12.866 16:23:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:14.772 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:15.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:15.032 
16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.032 [2024-11-19 16:23:05.195590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:15.032 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.033 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:15.604 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:15.604 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:15.604 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:15.604 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:15.604 16:23:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:18.137 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.137 16:23:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.137 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.137 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:18.137 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:18.137 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.138 16:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.138 [2024-11-19 16:23:08.016344] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.138 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.398 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:18.398 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:18.398 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:18.398 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:18.398 16:23:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:20.302 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:20.302 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:20.302 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:20.561 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
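The `waitforserial` polling traced above (`autotest_common.sh@1202`-`@1212`) can be sketched as a standalone helper. This is an illustrative reconstruction from the xtrace lines, not the exact upstream function; the retry limit (15) and the `lsblk -l -o NAME,SERIAL | grep -c` probe mirror what the log shows.

```shell
# Sketch of the waitforserial helper as it appears in this log:
# poll lsblk until a block device with the expected serial is visible,
# retrying up to 15 times with a 2-second sleep between attempts.
waitforserial() {
    local serial=$1
    local nvme_device_counter=${2:-1} nvme_devices=0
    local i=0
    while (( i++ <= 15 )); do
        # Count devices whose SERIAL column matches (grep -c mirrors the log)
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}
```

In the test flow, this runs immediately after `nvme connect`, so the first poll usually succeeds once the kernel has enumerated the namespace.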
00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.561 [2024-11-19 16:23:10.754079] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:20.561 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.562 16:23:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:21.130 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:21.130 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:21.130 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:21.130 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:21.130 16:23:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:23.037 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.297 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 [2024-11-19 16:23:13.529981] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.297 16:23:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:24.233 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:24.233 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:24.234 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:24.234 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:24.234 16:23:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:26.143 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:26.143 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:26.144 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 [2024-11-19 16:23:16.363852] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 [2024-11-19 16:23:16.411886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 
16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:17:26.144 [2024-11-19 16:23:16.460029] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.144 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 [2024-11-19 16:23:16.508241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 [2024-11-19 16:23:16.556426] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.404 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:26.405 "tick_rate": 2700000000, 00:17:26.405 "poll_groups": [ 00:17:26.405 { 00:17:26.405 "name": "nvmf_tgt_poll_group_000", 00:17:26.405 "admin_qpairs": 2, 00:17:26.405 "io_qpairs": 84, 00:17:26.405 "current_admin_qpairs": 0, 00:17:26.405 "current_io_qpairs": 0, 00:17:26.405 "pending_bdev_io": 0, 00:17:26.405 "completed_nvme_io": 210, 00:17:26.405 "transports": [ 00:17:26.405 { 00:17:26.405 "trtype": "TCP" 00:17:26.405 } 00:17:26.405 ] 00:17:26.405 }, 00:17:26.405 { 00:17:26.405 "name": "nvmf_tgt_poll_group_001", 00:17:26.405 "admin_qpairs": 2, 00:17:26.405 "io_qpairs": 84, 00:17:26.405 "current_admin_qpairs": 0, 00:17:26.405 "current_io_qpairs": 0, 00:17:26.405 "pending_bdev_io": 0, 00:17:26.405 "completed_nvme_io": 169, 00:17:26.405 "transports": [ 00:17:26.405 { 00:17:26.405 "trtype": "TCP" 00:17:26.405 } 00:17:26.405 ] 00:17:26.405 }, 00:17:26.405 { 00:17:26.405 "name": "nvmf_tgt_poll_group_002", 00:17:26.405 "admin_qpairs": 1, 00:17:26.405 "io_qpairs": 84, 00:17:26.405 "current_admin_qpairs": 0, 00:17:26.405 "current_io_qpairs": 0, 00:17:26.405 "pending_bdev_io": 0, 
00:17:26.405 "completed_nvme_io": 157, 00:17:26.405 "transports": [ 00:17:26.405 { 00:17:26.405 "trtype": "TCP" 00:17:26.405 } 00:17:26.405 ] 00:17:26.405 }, 00:17:26.405 { 00:17:26.405 "name": "nvmf_tgt_poll_group_003", 00:17:26.405 "admin_qpairs": 2, 00:17:26.405 "io_qpairs": 84, 00:17:26.405 "current_admin_qpairs": 0, 00:17:26.405 "current_io_qpairs": 0, 00:17:26.405 "pending_bdev_io": 0, 00:17:26.405 "completed_nvme_io": 150, 00:17:26.405 "transports": [ 00:17:26.405 { 00:17:26.405 "trtype": "TCP" 00:17:26.405 } 00:17:26.405 ] 00:17:26.405 } 00:17:26.405 ] 00:17:26.405 }' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:26.405 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:26.405 rmmod nvme_tcp 00:17:26.405 rmmod nvme_fabrics 00:17:26.405 rmmod nvme_keyring 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 209541 ']' 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 209541 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 209541 ']' 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 209541 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 209541 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 209541' 00:17:26.665 killing process with pid 209541 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 209541 00:17:26.665 16:23:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 209541 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:26.926 16:23:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:28.831 00:17:28.831 real 0m25.292s 00:17:28.831 user 1m21.806s 00:17:28.831 sys 0m4.183s 00:17:28.831 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:28.831 ************************************ 00:17:28.831 END TEST nvmf_rpc 00:17:28.831 ************************************ 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:28.831 ************************************ 00:17:28.831 START TEST nvmf_invalid 00:17:28.831 ************************************ 00:17:28.831 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:29.094 * Looking for test storage... 
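The nvmf_rpc test above tears down by calling the autotest `killprocess` helper on the target PID (209541). The pattern it follows can be sketched roughly as below — a minimal illustration, not the actual helper from `common/autotest_common.sh` (the real one also verifies the process name via `ps` before killing, as the trace shows):

```shell
# Minimal sketch of the killprocess pattern seen in the log above:
# confirm the PID is still alive before killing, then reap it so no
# zombie is left behind. Illustrative only; names mirror the log.
killprocess() {
  local pid=$1
  kill -0 "$pid" 2>/dev/null || return 1   # process already gone?
  kill "$pid"                              # send SIGTERM
  wait "$pid" 2>/dev/null || true          # reap; ignore the signal exit status
}

sleep 30 &
bgpid=$!
killprocess "$bgpid" && echo "killed $bgpid"
```

The `kill -0` probe is the key detail: it tests liveness (and permission) without delivering a signal, which is why the helper can bail out cleanly when the target already exited.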
00:17:29.094 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:29.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.094 --rc genhtml_branch_coverage=1 00:17:29.094 --rc 
genhtml_function_coverage=1 00:17:29.094 --rc genhtml_legend=1 00:17:29.094 --rc geninfo_all_blocks=1 00:17:29.094 --rc geninfo_unexecuted_blocks=1 00:17:29.094 00:17:29.094 ' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:29.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.094 --rc genhtml_branch_coverage=1 00:17:29.094 --rc genhtml_function_coverage=1 00:17:29.094 --rc genhtml_legend=1 00:17:29.094 --rc geninfo_all_blocks=1 00:17:29.094 --rc geninfo_unexecuted_blocks=1 00:17:29.094 00:17:29.094 ' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:29.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.094 --rc genhtml_branch_coverage=1 00:17:29.094 --rc genhtml_function_coverage=1 00:17:29.094 --rc genhtml_legend=1 00:17:29.094 --rc geninfo_all_blocks=1 00:17:29.094 --rc geninfo_unexecuted_blocks=1 00:17:29.094 00:17:29.094 ' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:29.094 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:29.094 --rc genhtml_branch_coverage=1 00:17:29.094 --rc genhtml_function_coverage=1 00:17:29.094 --rc genhtml_legend=1 00:17:29.094 --rc geninfo_all_blocks=1 00:17:29.094 --rc geninfo_unexecuted_blocks=1 00:17:29.094 00:17:29.094 ' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:29.094 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:29.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:29.094 16:23:19 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:29.094 16:23:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:31.630 16:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:31.630 16:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:31.630 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:31.630 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:31.630 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:31.630 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:31.630 16:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:31.630 16:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:31.630 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:31.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:31.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:17:31.631 00:17:31.631 --- 10.0.0.2 ping statistics --- 00:17:31.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.631 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:31.631 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:31.631 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:17:31.631 00:17:31.631 --- 10.0.0.1 ping statistics --- 00:17:31.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:31.631 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:31.631 16:23:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=214032 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 214032 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 214032 ']' 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:31.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 [2024-11-19 16:23:21.679184] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:17:31.631 [2024-11-19 16:23:21.679263] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:31.631 [2024-11-19 16:23:21.750810] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:31.631 [2024-11-19 16:23:21.795529] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:31.631 [2024-11-19 16:23:21.795578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:31.631 [2024-11-19 16:23:21.795606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:31.631 [2024-11-19 16:23:21.795617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:31.631 [2024-11-19 16:23:21.795627] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
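The `waitforlisten 214032` call above blocks until the freshly started `nvmf_tgt` process is up and listening on its UNIX domain RPC socket (`/var/tmp/spdk.sock`), retrying up to `max_retries=100` times. A much-simplified sketch of that polling pattern, assuming only that "ready" can be detected by the socket path appearing (the real helper also checks the pid and RPC responsiveness; the `/tmp/fake_spdk.sock` path here is purely for illustration):

```shell
#!/usr/bin/env bash
# Hypothetical, simplified waitforlisten: poll until the given socket path
# exists, up to max_retries attempts, 0.1s apart.
waitforlisten() {
    local sock=$1 max_retries=${2:-100} i
    for (( i = 0; i < max_retries; i++ )); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}

# Simulate a slow-starting app that creates its socket after 0.3s.
( sleep 0.3; touch /tmp/fake_spdk.sock ) &
waitforlisten /tmp/fake_spdk.sock && echo listening
rm -f /tmp/fake_spdk.sock
```

The test harness uses the same shape: start the target in the background, wait for the socket, then issue rpc.py calls against it.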
00:17:31.631 [2024-11-19 16:23:21.797211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:31.631 [2024-11-19 16:23:21.797295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:31.631 [2024-11-19 16:23:21.797298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.631 [2024-11-19 16:23:21.797235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:31.631 16:23:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode24497 00:17:32.198 [2024-11-19 16:23:22.233462] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:32.198 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:32.198 { 00:17:32.198 "nqn": "nqn.2016-06.io.spdk:cnode24497", 00:17:32.198 "tgt_name": "foobar", 00:17:32.198 "method": "nvmf_create_subsystem", 00:17:32.198 "req_id": 1 00:17:32.198 } 00:17:32.198 Got JSON-RPC error 
response 00:17:32.198 response: 00:17:32.198 { 00:17:32.198 "code": -32603, 00:17:32.198 "message": "Unable to find target foobar" 00:17:32.198 }' 00:17:32.198 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:32.198 { 00:17:32.198 "nqn": "nqn.2016-06.io.spdk:cnode24497", 00:17:32.198 "tgt_name": "foobar", 00:17:32.198 "method": "nvmf_create_subsystem", 00:17:32.198 "req_id": 1 00:17:32.198 } 00:17:32.198 Got JSON-RPC error response 00:17:32.198 response: 00:17:32.198 { 00:17:32.198 "code": -32603, 00:17:32.199 "message": "Unable to find target foobar" 00:17:32.199 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11567 00:17:32.199 [2024-11-19 16:23:22.506373] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11567: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:32.199 { 00:17:32.199 "nqn": "nqn.2016-06.io.spdk:cnode11567", 00:17:32.199 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:32.199 "method": "nvmf_create_subsystem", 00:17:32.199 "req_id": 1 00:17:32.199 } 00:17:32.199 Got JSON-RPC error response 00:17:32.199 response: 00:17:32.199 { 00:17:32.199 "code": -32602, 00:17:32.199 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:32.199 }' 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:32.199 { 00:17:32.199 "nqn": "nqn.2016-06.io.spdk:cnode11567", 00:17:32.199 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:32.199 "method": "nvmf_create_subsystem", 
00:17:32.199 "req_id": 1 00:17:32.199 } 00:17:32.199 Got JSON-RPC error response 00:17:32.199 response: 00:17:32.199 { 00:17:32.199 "code": -32602, 00:17:32.199 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:32.199 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:32.199 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode6511 00:17:32.459 [2024-11-19 16:23:22.775210] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6511: invalid model number 'SPDK_Controller' 00:17:32.459 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:32.459 { 00:17:32.459 "nqn": "nqn.2016-06.io.spdk:cnode6511", 00:17:32.459 "model_number": "SPDK_Controller\u001f", 00:17:32.459 "method": "nvmf_create_subsystem", 00:17:32.459 "req_id": 1 00:17:32.459 } 00:17:32.459 Got JSON-RPC error response 00:17:32.459 response: 00:17:32.459 { 00:17:32.459 "code": -32602, 00:17:32.459 "message": "Invalid MN SPDK_Controller\u001f" 00:17:32.459 }' 00:17:32.459 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:32.459 { 00:17:32.459 "nqn": "nqn.2016-06.io.spdk:cnode6511", 00:17:32.459 "model_number": "SPDK_Controller\u001f", 00:17:32.459 "method": "nvmf_create_subsystem", 00:17:32.459 "req_id": 1 00:17:32.459 } 00:17:32.459 Got JSON-RPC error response 00:17:32.459 response: 00:17:32.459 { 00:17:32.459 "code": -32602, 00:17:32.459 "message": "Invalid MN SPDK_Controller\u001f" 00:17:32.459 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:32.720 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:17:32.720 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:32.720 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.720 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.721 16:23:22 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ 7 == \- ]] 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '7Kl{`"[hwcU59Qqb+ 5A]' 00:17:32.721 16:23:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '7Kl{`"[hwcU59Qqb+ 5A]' nqn.2016-06.io.spdk:cnode26405 00:17:32.981 [2024-11-19 16:23:23.160539] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26405: invalid serial number '7Kl{`"[hwcU59Qqb+ 5A]' 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:32.981 { 00:17:32.981 "nqn": "nqn.2016-06.io.spdk:cnode26405", 00:17:32.981 "serial_number": "7Kl{`\"[hwcU59Qqb+ 5A]", 00:17:32.981 "method": "nvmf_create_subsystem", 00:17:32.981 "req_id": 1 00:17:32.981 } 00:17:32.981 Got JSON-RPC error response 00:17:32.981 response: 00:17:32.981 { 00:17:32.981 "code": -32602, 00:17:32.981 "message": "Invalid SN 7Kl{`\"[hwcU59Qqb+ 5A]" 00:17:32.981 }' 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:32.981 { 00:17:32.981 "nqn": "nqn.2016-06.io.spdk:cnode26405", 00:17:32.981 "serial_number": "7Kl{`\"[hwcU59Qqb+ 5A]", 00:17:32.981 "method": "nvmf_create_subsystem", 00:17:32.981 "req_id": 1 00:17:32.981 } 00:17:32.981 Got JSON-RPC error response 00:17:32.981 response: 00:17:32.981 { 00:17:32.981 "code": -32602, 00:17:32.981 "message": "Invalid SN 7Kl{`\"[hwcU59Qqb+ 5A]" 00:17:32.981 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:32.981 16:23:23 
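The long per-character trace above is `gen_random_s` assembling a random serial number one character at a time: for each position it picks a code from the printable ASCII table, converts it with `printf %x` / `echo -e '\xNN'`, and appends it to `string`. A simplified re-implementation (an assumption, not the actual invalid.sh helper: the real one draws from codes 32-127 and rejects strings starting with `-`; this sketch uses codes 33-126 so command substitution stays whitespace-safe):

```shell
#!/usr/bin/env bash
# Build a string of `length` random printable ASCII characters (codes 33-126).
gen_random_s() {
    local length=$1 ll code string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))                     # printable, no space/DEL
        string+=$(printf '%b' "\\0$(printf '%o' "$code")")  # code -> character
    done
    echo "$string"
}

gen_random_s 21   # prints a random 21-character string
```

The test then feeds such a string (21 characters for serial numbers, 41 for model numbers) to `nvmf_create_subsystem` and glob-matches the JSON-RPC error text, e.g. `*Invalid SN*`, to confirm the target rejects it.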
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.981 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.981 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 
00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:17:32.982 
16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:32.982 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:32.982 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:17:32.983 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:32.983 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ f == \- ]] 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3' 00:17:32.983 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3' nqn.2016-06.io.spdk:cnode21844 00:17:33.243 [2024-11-19 16:23:23.565812] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21844: invalid model number 'feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3' 00:17:33.504 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:33.504 { 00:17:33.504 "nqn": "nqn.2016-06.io.spdk:cnode21844", 00:17:33.504 "model_number": "feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3", 00:17:33.504 "method": "nvmf_create_subsystem", 00:17:33.504 "req_id": 1 00:17:33.504 } 00:17:33.504 Got JSON-RPC error response 00:17:33.504 response: 00:17:33.504 { 00:17:33.504 "code": -32602, 00:17:33.504 "message": "Invalid MN feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3" 00:17:33.504 }' 00:17:33.504 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:33.504 { 00:17:33.504 "nqn": 
"nqn.2016-06.io.spdk:cnode21844", 00:17:33.504 "model_number": "feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3", 00:17:33.504 "method": "nvmf_create_subsystem", 00:17:33.504 "req_id": 1 00:17:33.504 } 00:17:33.504 Got JSON-RPC error response 00:17:33.504 response: 00:17:33.504 { 00:17:33.504 "code": -32602, 00:17:33.504 "message": "Invalid MN feRvv?m4XsLtO/iN8)|QbS1Xva})[l0#Yfr)k;%Z3" 00:17:33.504 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:33.504 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:33.504 [2024-11-19 16:23:23.834790] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.764 16:23:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:34.023 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:34.023 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:34.023 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:34.023 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:34.023 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:34.281 [2024-11-19 16:23:24.372639] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:34.281 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:34.281 { 00:17:34.281 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:34.281 "listen_address": { 00:17:34.281 "trtype": "tcp", 00:17:34.281 "traddr": "", 00:17:34.281 "trsvcid": "4421" 
00:17:34.281 }, 00:17:34.281 "method": "nvmf_subsystem_remove_listener", 00:17:34.281 "req_id": 1 00:17:34.281 } 00:17:34.281 Got JSON-RPC error response 00:17:34.281 response: 00:17:34.281 { 00:17:34.281 "code": -32602, 00:17:34.281 "message": "Invalid parameters" 00:17:34.281 }' 00:17:34.281 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:34.281 { 00:17:34.281 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:34.281 "listen_address": { 00:17:34.281 "trtype": "tcp", 00:17:34.281 "traddr": "", 00:17:34.281 "trsvcid": "4421" 00:17:34.281 }, 00:17:34.281 "method": "nvmf_subsystem_remove_listener", 00:17:34.281 "req_id": 1 00:17:34.281 } 00:17:34.281 Got JSON-RPC error response 00:17:34.281 response: 00:17:34.281 { 00:17:34.281 "code": -32602, 00:17:34.281 "message": "Invalid parameters" 00:17:34.281 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:34.281 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode714 -i 0 00:17:34.539 [2024-11-19 16:23:24.653554] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode714: invalid cntlid range [0-65519] 00:17:34.539 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:34.539 { 00:17:34.540 "nqn": "nqn.2016-06.io.spdk:cnode714", 00:17:34.540 "min_cntlid": 0, 00:17:34.540 "method": "nvmf_create_subsystem", 00:17:34.540 "req_id": 1 00:17:34.540 } 00:17:34.540 Got JSON-RPC error response 00:17:34.540 response: 00:17:34.540 { 00:17:34.540 "code": -32602, 00:17:34.540 "message": "Invalid cntlid range [0-65519]" 00:17:34.540 }' 00:17:34.540 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:34.540 { 00:17:34.540 "nqn": "nqn.2016-06.io.spdk:cnode714", 00:17:34.540 "min_cntlid": 0, 00:17:34.540 "method": 
"nvmf_create_subsystem", 00:17:34.540 "req_id": 1 00:17:34.540 } 00:17:34.540 Got JSON-RPC error response 00:17:34.540 response: 00:17:34.540 { 00:17:34.540 "code": -32602, 00:17:34.540 "message": "Invalid cntlid range [0-65519]" 00:17:34.540 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.540 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31179 -i 65520 00:17:34.798 [2024-11-19 16:23:24.918404] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31179: invalid cntlid range [65520-65519] 00:17:34.798 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:34.798 { 00:17:34.798 "nqn": "nqn.2016-06.io.spdk:cnode31179", 00:17:34.798 "min_cntlid": 65520, 00:17:34.798 "method": "nvmf_create_subsystem", 00:17:34.798 "req_id": 1 00:17:34.798 } 00:17:34.798 Got JSON-RPC error response 00:17:34.798 response: 00:17:34.798 { 00:17:34.798 "code": -32602, 00:17:34.798 "message": "Invalid cntlid range [65520-65519]" 00:17:34.798 }' 00:17:34.798 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:34.798 { 00:17:34.798 "nqn": "nqn.2016-06.io.spdk:cnode31179", 00:17:34.798 "min_cntlid": 65520, 00:17:34.798 "method": "nvmf_create_subsystem", 00:17:34.798 "req_id": 1 00:17:34.798 } 00:17:34.798 Got JSON-RPC error response 00:17:34.798 response: 00:17:34.798 { 00:17:34.798 "code": -32602, 00:17:34.798 "message": "Invalid cntlid range [65520-65519]" 00:17:34.798 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:34.798 16:23:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode16497 -I 0 00:17:35.057 [2024-11-19 16:23:25.179268] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: 
Subsystem nqn.2016-06.io.spdk:cnode16497: invalid cntlid range [1-0] 00:17:35.057 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:35.057 { 00:17:35.057 "nqn": "nqn.2016-06.io.spdk:cnode16497", 00:17:35.057 "max_cntlid": 0, 00:17:35.057 "method": "nvmf_create_subsystem", 00:17:35.057 "req_id": 1 00:17:35.057 } 00:17:35.057 Got JSON-RPC error response 00:17:35.057 response: 00:17:35.057 { 00:17:35.057 "code": -32602, 00:17:35.057 "message": "Invalid cntlid range [1-0]" 00:17:35.057 }' 00:17:35.057 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:35.057 { 00:17:35.057 "nqn": "nqn.2016-06.io.spdk:cnode16497", 00:17:35.057 "max_cntlid": 0, 00:17:35.057 "method": "nvmf_create_subsystem", 00:17:35.057 "req_id": 1 00:17:35.057 } 00:17:35.057 Got JSON-RPC error response 00:17:35.057 response: 00:17:35.057 { 00:17:35.057 "code": -32602, 00:17:35.057 "message": "Invalid cntlid range [1-0]" 00:17:35.057 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.057 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25388 -I 65520 00:17:35.315 [2024-11-19 16:23:25.444156] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25388: invalid cntlid range [1-65520] 00:17:35.315 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:35.315 { 00:17:35.315 "nqn": "nqn.2016-06.io.spdk:cnode25388", 00:17:35.315 "max_cntlid": 65520, 00:17:35.315 "method": "nvmf_create_subsystem", 00:17:35.315 "req_id": 1 00:17:35.315 } 00:17:35.315 Got JSON-RPC error response 00:17:35.315 response: 00:17:35.315 { 00:17:35.315 "code": -32602, 00:17:35.315 "message": "Invalid cntlid range [1-65520]" 00:17:35.315 }' 00:17:35.315 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@80 -- # [[ request: 00:17:35.315 { 00:17:35.315 "nqn": "nqn.2016-06.io.spdk:cnode25388", 00:17:35.315 "max_cntlid": 65520, 00:17:35.315 "method": "nvmf_create_subsystem", 00:17:35.315 "req_id": 1 00:17:35.315 } 00:17:35.315 Got JSON-RPC error response 00:17:35.315 response: 00:17:35.315 { 00:17:35.315 "code": -32602, 00:17:35.315 "message": "Invalid cntlid range [1-65520]" 00:17:35.315 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.315 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22977 -i 6 -I 5 00:17:35.574 [2024-11-19 16:23:25.725087] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22977: invalid cntlid range [6-5] 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:35.574 { 00:17:35.574 "nqn": "nqn.2016-06.io.spdk:cnode22977", 00:17:35.574 "min_cntlid": 6, 00:17:35.574 "max_cntlid": 5, 00:17:35.574 "method": "nvmf_create_subsystem", 00:17:35.574 "req_id": 1 00:17:35.574 } 00:17:35.574 Got JSON-RPC error response 00:17:35.574 response: 00:17:35.574 { 00:17:35.574 "code": -32602, 00:17:35.574 "message": "Invalid cntlid range [6-5]" 00:17:35.574 }' 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:35.574 { 00:17:35.574 "nqn": "nqn.2016-06.io.spdk:cnode22977", 00:17:35.574 "min_cntlid": 6, 00:17:35.574 "max_cntlid": 5, 00:17:35.574 "method": "nvmf_create_subsystem", 00:17:35.574 "req_id": 1 00:17:35.574 } 00:17:35.574 Got JSON-RPC error response 00:17:35.574 response: 00:17:35.574 { 00:17:35.574 "code": -32602, 00:17:35.574 "message": "Invalid cntlid range [6-5]" 00:17:35.574 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:35.574 { 00:17:35.574 "name": "foobar", 00:17:35.574 "method": "nvmf_delete_target", 00:17:35.574 "req_id": 1 00:17:35.574 } 00:17:35.574 Got JSON-RPC error response 00:17:35.574 response: 00:17:35.574 { 00:17:35.574 "code": -32602, 00:17:35.574 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:35.574 }' 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:35.574 { 00:17:35.574 "name": "foobar", 00:17:35.574 "method": "nvmf_delete_target", 00:17:35.574 "req_id": 1 00:17:35.574 } 00:17:35.574 Got JSON-RPC error response 00:17:35.574 response: 00:17:35.574 { 00:17:35.574 "code": -32602, 00:17:35.574 "message": "The specified target doesn't exist, cannot delete it." 00:17:35.574 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.574 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.574 rmmod nvme_tcp 00:17:35.574 
rmmod nvme_fabrics 00:17:35.574 rmmod nvme_keyring 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 214032 ']' 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 214032 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 214032 ']' 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 214032 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 214032 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 214032' 00:17:35.833 killing process with pid 214032 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 214032 00:17:35.833 16:23:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 214032 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.833 16:23:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:38.374 00:17:38.374 real 0m9.057s 00:17:38.374 user 0m21.501s 00:17:38.374 sys 0m2.553s 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:38.374 ************************************ 00:17:38.374 END TEST nvmf_invalid 00:17:38.374 ************************************ 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:38.374 16:23:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:38.374 ************************************ 00:17:38.374 START TEST nvmf_connect_stress 00:17:38.374 ************************************ 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:38.374 * Looking for test storage... 00:17:38.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:38.374 16:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- 
# echo 2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.374 --rc genhtml_branch_coverage=1 00:17:38.374 --rc genhtml_function_coverage=1 00:17:38.374 --rc genhtml_legend=1 00:17:38.374 --rc geninfo_all_blocks=1 00:17:38.374 --rc geninfo_unexecuted_blocks=1 00:17:38.374 00:17:38.374 ' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.374 --rc genhtml_branch_coverage=1 00:17:38.374 --rc genhtml_function_coverage=1 00:17:38.374 --rc genhtml_legend=1 00:17:38.374 --rc geninfo_all_blocks=1 00:17:38.374 --rc geninfo_unexecuted_blocks=1 00:17:38.374 00:17:38.374 ' 00:17:38.374 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:38.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.374 --rc genhtml_branch_coverage=1 00:17:38.374 --rc genhtml_function_coverage=1 00:17:38.374 --rc genhtml_legend=1 00:17:38.375 --rc geninfo_all_blocks=1 00:17:38.375 --rc geninfo_unexecuted_blocks=1 00:17:38.375 00:17:38.375 ' 00:17:38.375 16:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:38.375 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.375 --rc genhtml_branch_coverage=1 00:17:38.375 --rc genhtml_function_coverage=1 00:17:38.375 --rc genhtml_legend=1 00:17:38.375 --rc geninfo_all_blocks=1 00:17:38.375 --rc geninfo_unexecuted_blocks=1 00:17:38.375 00:17:38.375 ' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.375 16:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:38.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:38.375 16:23:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:38.375 16:23:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- 
# local -a pci_devs 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:40.397 
Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:40.397 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:40.397 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:40.397 16:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:40.397 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:40.397 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:40.398 
16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:17:40.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:40.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:17:40.398 00:17:40.398 --- 10.0.0.2 ping statistics --- 00:17:40.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.398 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:40.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:40.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:40.398 00:17:40.398 --- 10.0.0.1 ping statistics --- 00:17:40.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:40.398 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:40.398 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 
0xE 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=216678 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 216678 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 216678 ']' 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.678 16:23:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.678 [2024-11-19 16:23:30.764635] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:17:40.678 [2024-11-19 16:23:30.764710] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:40.678 [2024-11-19 16:23:30.837186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:40.678 [2024-11-19 16:23:30.883245] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:40.678 [2024-11-19 16:23:30.883298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:40.678 [2024-11-19 16:23:30.883326] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:40.678 [2024-11-19 16:23:30.883337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:40.678 [2024-11-19 16:23:30.883347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:40.678 [2024-11-19 16:23:30.884904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:40.678 [2024-11-19 16:23:30.884972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:40.678 [2024-11-19 16:23:30.884975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:40.678 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:40.678 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:40.678 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:40.678 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.946 [2024-11-19 16:23:31.031007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.946 [2024-11-19 16:23:31.048516] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:40.946 NULL1 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=216704 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.946 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.947 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.224 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.224 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:41.224 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.224 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
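The target bring-up traced above reduces to four RPCs issued before the connect_stress workload starts: create the TCP transport, create the subsystem, attach a listener, and back it with a null bdev. A minimal sketch of that sequence, assuming a generic stand-in for the harness's `rpc_cmd` wrapper (the real wrapper and its socket path are specific to the autotest environment):

```shell
#!/usr/bin/env bash
# Sketch only: enumerate the RPC sequence visible in the log above.
# "$RPC" is a hypothetical stand-in; the harness uses its own rpc_cmd wrapper.
RPC="echo rpc.py"   # swapped for the real client in a live run
setup_rpcs=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
  "bdev_null_create NULL1 1000 512"
)
for cmd in "${setup_rpcs[@]}"; do
  $RPC $cmd          # each step must succeed before the stress tool connects
done
```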
common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.224 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.502 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.502 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:41.502 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.502 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.502 16:23:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.782 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:41.782 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:41.782 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:41.782 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:41.782 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.373 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.373 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:42.373 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.373 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.373 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.639 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.639 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:42.639 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.639 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.639 16:23:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:42.946 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:42.946 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:42.946 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:42.946 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:42.946 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.227 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.227 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:43.227 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.227 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.227 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.510 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.510 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:43.510 16:23:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.510 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.510 16:23:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:43.780 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.780 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:43.780 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:43.780 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.780 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.052 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.052 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:44.052 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.052 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.052 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.333 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.333 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:44.333 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.333 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.333 16:23:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:44.938 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.938 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:44.938 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:44.939 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.939 16:23:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.213 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.213 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:45.213 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.213 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.213 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.482 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.482 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:45.482 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.482 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.482 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:45.759 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.759 16:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:45.759 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:45.759 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.759 16:23:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.037 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.037 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:46.037 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.037 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.037 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.318 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.318 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:46.318 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.318 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.318 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:46.576 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.576 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:46.576 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:46.576 16:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.576 16:23:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.143 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.143 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:47.143 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.143 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.143 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.402 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.402 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:47.402 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.402 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.402 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.661 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.661 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:47.661 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.661 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.661 16:23:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:47.919 16:23:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.919 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:47.919 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:47.919 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.919 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.177 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.177 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:48.177 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.177 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.177 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:48.747 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.747 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:48.747 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:48.747 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.747 16:23:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.007 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.007 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:49.007 
16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.007 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.007 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.267 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.267 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:49.267 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.267 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.267 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.526 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.526 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:49.526 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.526 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.526 16:23:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:49.784 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.784 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:49.784 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:49.784 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.784 
16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.352 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.352 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:50.352 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.352 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.352 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.612 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.612 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:50.612 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.612 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.612 16:23:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:50.872 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.872 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:50.872 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:50.872 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.872 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:51.131 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:51.131 16:23:41 
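The repeated `kill -0 216704` checks in the trace above implement a liveness poll: signal 0 is never delivered, so `kill -0` only reports whether the PID still exists, and the harness keeps firing RPC batches until connect_stress exits on its own. A minimal sketch of the same pattern, with a short `sleep` standing in for the workload:

```shell
#!/usr/bin/env bash
# kill -0 sends no signal; it only tests whether the process exists.
sleep 1 &             # stand-in for the connect_stress workload
pid=$!
while kill -0 "$pid" 2>/dev/null; do
  : # the real test issues an rpc_cmd batch here on every iteration
  sleep 0.2
done
wait "$pid" 2>/dev/null
status=exited         # reached only once the workload PID is gone
```

Once the PID disappears, `kill -0` fails with "No such process", which is exactly the line the log prints before cleanup begins.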
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 216704 00:17:51.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (216704) - No such process 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 216704 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:51.131 rmmod nvme_tcp 00:17:51.131 rmmod nvme_fabrics 00:17:51.131 rmmod nvme_keyring 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 216678 ']' 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 216678 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 216678 ']' 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 216678 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:51.131 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 216678 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 216678' 00:17:51.390 killing process with pid 216678 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 216678 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 216678 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:51.390 16:23:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:53.934 00:17:53.934 real 0m15.461s 00:17:53.934 user 0m40.068s 00:17:53.934 sys 0m4.610s 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:53.934 ************************************ 00:17:53.934 END TEST nvmf_connect_stress 00:17:53.934 ************************************ 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:53.934 ************************************ 00:17:53.934 START TEST nvmf_fused_ordering 00:17:53.934 ************************************ 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:53.934 * Looking for test storage... 00:17:53.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:53.934 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:53.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.935 --rc genhtml_branch_coverage=1 00:17:53.935 --rc genhtml_function_coverage=1 00:17:53.935 --rc genhtml_legend=1 00:17:53.935 --rc geninfo_all_blocks=1 00:17:53.935 --rc geninfo_unexecuted_blocks=1 00:17:53.935 00:17:53.935 ' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:53.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.935 --rc genhtml_branch_coverage=1 00:17:53.935 --rc genhtml_function_coverage=1 00:17:53.935 --rc genhtml_legend=1 00:17:53.935 --rc geninfo_all_blocks=1 00:17:53.935 --rc geninfo_unexecuted_blocks=1 00:17:53.935 00:17:53.935 ' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:53.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.935 --rc genhtml_branch_coverage=1 00:17:53.935 --rc genhtml_function_coverage=1 00:17:53.935 --rc genhtml_legend=1 00:17:53.935 --rc geninfo_all_blocks=1 00:17:53.935 --rc geninfo_unexecuted_blocks=1 00:17:53.935 00:17:53.935 ' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:53.935 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:53.935 --rc genhtml_branch_coverage=1 
00:17:53.935 --rc genhtml_function_coverage=1 00:17:53.935 --rc genhtml_legend=1 00:17:53.935 --rc geninfo_all_blocks=1 00:17:53.935 --rc geninfo_unexecuted_blocks=1 00:17:53.935 00:17:53.935 ' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:53.935 16:23:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:53.935 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:53.935 16:23:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:55.838 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:55.838 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:55.838 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:55.838 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:55.839 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.839 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:55.839 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:55.839 Found net devices under 0000:0a:00.1: cvl_0_1 
00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:55.839 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:56.096 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:56.096 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:17:56.096 00:17:56.096 --- 10.0.0.2 ping statistics --- 00:17:56.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.096 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:56.096 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:56.096 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:17:56.096 00:17:56.096 --- 10.0.0.1 ping statistics --- 00:17:56.096 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:56.096 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:56.096 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:56.097 16:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=220004 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 220004 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 220004 ']' 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:56.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.097 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.097 [2024-11-19 16:23:46.264522] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:17:56.097 [2024-11-19 16:23:46.264610] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:56.097 [2024-11-19 16:23:46.337626] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.097 [2024-11-19 16:23:46.382173] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:56.097 [2024-11-19 16:23:46.382229] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:56.097 [2024-11-19 16:23:46.382258] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:56.097 [2024-11-19 16:23:46.382278] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:56.097 [2024-11-19 16:23:46.382288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:56.097 [2024-11-19 16:23:46.382869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 [2024-11-19 16:23:46.521925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 [2024-11-19 16:23:46.538200] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 NULL1 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.356 16:23:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:56.356 [2024-11-19 16:23:46.581233] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:17:56.357 [2024-11-19 16:23:46.581281] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220030 ] 00:17:56.928 Attached to nqn.2016-06.io.spdk:cnode1 00:17:56.928 Namespace ID: 1 size: 1GB 00:17:56.928 fused_ordering(0) 00:17:56.928 fused_ordering(1) 00:17:56.928 fused_ordering(2) 00:17:56.928 fused_ordering(3) 00:17:56.928 fused_ordering(4) 00:17:56.928 fused_ordering(5) 00:17:56.928 fused_ordering(6) 00:17:56.928 fused_ordering(7) 00:17:56.928 fused_ordering(8) 00:17:56.928 fused_ordering(9) 00:17:56.928 fused_ordering(10) 00:17:56.928 fused_ordering(11) 00:17:56.928 fused_ordering(12) 00:17:56.928 fused_ordering(13) 00:17:56.928 fused_ordering(14) 00:17:56.928 fused_ordering(15) 00:17:56.928 fused_ordering(16) 00:17:56.928 fused_ordering(17) 00:17:56.928 fused_ordering(18) 00:17:56.928 fused_ordering(19) 00:17:56.928 fused_ordering(20) 00:17:56.928 fused_ordering(21) 00:17:56.928 fused_ordering(22) 00:17:56.928 fused_ordering(23) 00:17:56.928 fused_ordering(24) 00:17:56.928 fused_ordering(25) 00:17:56.928 fused_ordering(26) 00:17:56.928 fused_ordering(27) 00:17:56.928 
fused_ordering(28) 00:17:56.928 fused_ordering(29) 00:17:56.928 fused_ordering(30) 00:17:56.928 fused_ordering(31) 00:17:56.928 fused_ordering(32) 00:17:56.928 fused_ordering(33) 00:17:56.928 fused_ordering(34) 00:17:56.928 fused_ordering(35) 00:17:56.928 fused_ordering(36) 00:17:56.928 fused_ordering(37) 00:17:56.928 fused_ordering(38) 00:17:56.928 fused_ordering(39) 00:17:56.928 fused_ordering(40) 00:17:56.928 fused_ordering(41) 00:17:56.928 fused_ordering(42) 00:17:56.928 fused_ordering(43) 00:17:56.928 fused_ordering(44) 00:17:56.928 fused_ordering(45) 00:17:56.928 fused_ordering(46) 00:17:56.928 fused_ordering(47) 00:17:56.928 fused_ordering(48) 00:17:56.928 fused_ordering(49) 00:17:56.928 fused_ordering(50) 00:17:56.928 fused_ordering(51) 00:17:56.928 fused_ordering(52) 00:17:56.928 fused_ordering(53) 00:17:56.928 fused_ordering(54) 00:17:56.928 fused_ordering(55) 00:17:56.928 fused_ordering(56) 00:17:56.928 fused_ordering(57) 00:17:56.928 fused_ordering(58) 00:17:56.928 fused_ordering(59) 00:17:56.928 fused_ordering(60) 00:17:56.928 fused_ordering(61) 00:17:56.928 fused_ordering(62) 00:17:56.928 fused_ordering(63) 00:17:56.928 fused_ordering(64) 00:17:56.928 fused_ordering(65) 00:17:56.928 fused_ordering(66) 00:17:56.928 fused_ordering(67) 00:17:56.928 fused_ordering(68) 00:17:56.928 fused_ordering(69) 00:17:56.928 fused_ordering(70) 00:17:56.928 fused_ordering(71) 00:17:56.928 fused_ordering(72) 00:17:56.928 fused_ordering(73) 00:17:56.928 fused_ordering(74) 00:17:56.928 fused_ordering(75) 00:17:56.928 fused_ordering(76) 00:17:56.928 fused_ordering(77) 00:17:56.928 fused_ordering(78) 00:17:56.928 fused_ordering(79) 00:17:56.928 fused_ordering(80) 00:17:56.928 fused_ordering(81) 00:17:56.928 fused_ordering(82) 00:17:56.928 fused_ordering(83) 00:17:56.928 fused_ordering(84) 00:17:56.928 fused_ordering(85) 00:17:56.928 fused_ordering(86) 00:17:56.928 fused_ordering(87) 00:17:56.928 fused_ordering(88) 00:17:56.928 fused_ordering(89) 00:17:56.928 
fused_ordering(90) 00:17:56.928 fused_ordering(91) 00:17:56.928 fused_ordering(92) 00:17:56.928 fused_ordering(93) 00:17:56.928 fused_ordering(94) 00:17:56.928 fused_ordering(95) 00:17:56.928 fused_ordering(96) 00:17:56.928 fused_ordering(97) 00:17:56.928 fused_ordering(98) 00:17:56.928 fused_ordering(99) 00:17:56.928 fused_ordering(100) 00:17:56.928 fused_ordering(101) 00:17:56.928 fused_ordering(102) 00:17:56.928 fused_ordering(103) 00:17:56.928 fused_ordering(104) 00:17:56.928 fused_ordering(105) 00:17:56.928 fused_ordering(106) 00:17:56.928 fused_ordering(107) 00:17:56.928 fused_ordering(108) 00:17:56.928 fused_ordering(109) 00:17:56.928 fused_ordering(110) 00:17:56.928 fused_ordering(111) 00:17:56.928 fused_ordering(112) 00:17:56.928 fused_ordering(113) 00:17:56.928 fused_ordering(114) 00:17:56.928 fused_ordering(115) 00:17:56.928 fused_ordering(116) 00:17:56.928 fused_ordering(117) 00:17:56.928 fused_ordering(118) 00:17:56.928 fused_ordering(119) 00:17:56.928 fused_ordering(120) 00:17:56.928 fused_ordering(121) 00:17:56.928 fused_ordering(122) 00:17:56.928 fused_ordering(123) 00:17:56.928 fused_ordering(124) 00:17:56.928 fused_ordering(125) 00:17:56.928 fused_ordering(126) 00:17:56.928 fused_ordering(127) 00:17:56.928 fused_ordering(128) 00:17:56.928 fused_ordering(129) 00:17:56.928 fused_ordering(130) 00:17:56.928 fused_ordering(131) 00:17:56.928 fused_ordering(132) 00:17:56.928 fused_ordering(133) 00:17:56.928 fused_ordering(134) 00:17:56.928 fused_ordering(135) 00:17:56.928 fused_ordering(136) 00:17:56.928 fused_ordering(137) 00:17:56.928 fused_ordering(138) 00:17:56.928 fused_ordering(139) 00:17:56.928 fused_ordering(140) 00:17:56.928 fused_ordering(141) 00:17:56.928 fused_ordering(142) 00:17:56.928 fused_ordering(143) 00:17:56.928 fused_ordering(144) 00:17:56.928 fused_ordering(145) 00:17:56.928 fused_ordering(146) 00:17:56.928 fused_ordering(147) 00:17:56.928 fused_ordering(148) 00:17:56.928 fused_ordering(149) 00:17:56.928 fused_ordering(150) 
00:17:56.928 fused_ordering(151) 00:17:56.928 fused_ordering(152) 00:17:56.928 fused_ordering(153) 00:17:56.928 fused_ordering(154) 00:17:56.928 fused_ordering(155) 00:17:56.928 fused_ordering(156) 00:17:56.928 fused_ordering(157) 00:17:56.928 fused_ordering(158) 00:17:56.928 fused_ordering(159) 00:17:56.928 fused_ordering(160) 00:17:56.928 fused_ordering(161) 00:17:56.928 fused_ordering(162) 00:17:56.928 fused_ordering(163) 00:17:56.928 fused_ordering(164) 00:17:56.928 fused_ordering(165) 00:17:56.928 fused_ordering(166) 00:17:56.928 fused_ordering(167) 00:17:56.928 fused_ordering(168) 00:17:56.928 fused_ordering(169) 00:17:56.928 fused_ordering(170) 00:17:56.928 fused_ordering(171) 00:17:56.928 fused_ordering(172) 00:17:56.928 fused_ordering(173) 00:17:56.928 fused_ordering(174) 00:17:56.928 fused_ordering(175) 00:17:56.928 fused_ordering(176) 00:17:56.928 fused_ordering(177) 00:17:56.928 fused_ordering(178) 00:17:56.928 fused_ordering(179) 00:17:56.928 fused_ordering(180) 00:17:56.928 fused_ordering(181) 00:17:56.928 fused_ordering(182) 00:17:56.928 fused_ordering(183) 00:17:56.928 fused_ordering(184) 00:17:56.928 fused_ordering(185) 00:17:56.928 fused_ordering(186) 00:17:56.928 fused_ordering(187) 00:17:56.928 fused_ordering(188) 00:17:56.928 fused_ordering(189) 00:17:56.928 fused_ordering(190) 00:17:56.928 fused_ordering(191) 00:17:56.928 fused_ordering(192) 00:17:56.928 fused_ordering(193) 00:17:56.928 fused_ordering(194) 00:17:56.928 fused_ordering(195) 00:17:56.928 fused_ordering(196) 00:17:56.928 fused_ordering(197) 00:17:56.928 fused_ordering(198) 00:17:56.928 fused_ordering(199) 00:17:56.928 fused_ordering(200) 00:17:56.928 fused_ordering(201) 00:17:56.928 fused_ordering(202) 00:17:56.928 fused_ordering(203) 00:17:56.928 fused_ordering(204) 00:17:56.928 fused_ordering(205) 00:17:57.187 fused_ordering(206) 00:17:57.187 fused_ordering(207) 00:17:57.187 fused_ordering(208) 00:17:57.187 fused_ordering(209) 00:17:57.187 fused_ordering(210) 00:17:57.187 
fused_ordering(211) 00:17:57.187 fused_ordering(212) 00:17:57.187 fused_ordering(213) 00:17:57.187 fused_ordering(214) 00:17:57.187 fused_ordering(215) 00:17:57.187 fused_ordering(216) 00:17:57.187 fused_ordering(217) 00:17:57.187 fused_ordering(218) 00:17:57.187 fused_ordering(219) 00:17:57.187 fused_ordering(220) 00:17:57.187 fused_ordering(221) 00:17:57.187 fused_ordering(222) 00:17:57.187 fused_ordering(223) 00:17:57.187 fused_ordering(224) 00:17:57.187 fused_ordering(225) 00:17:57.187 fused_ordering(226) 00:17:57.187 fused_ordering(227) 00:17:57.187 fused_ordering(228) 00:17:57.187 fused_ordering(229) 00:17:57.187 fused_ordering(230) 00:17:57.187 fused_ordering(231) 00:17:57.187 fused_ordering(232) 00:17:57.187 fused_ordering(233) 00:17:57.187 fused_ordering(234) 00:17:57.187 fused_ordering(235) 00:17:57.187 fused_ordering(236) 00:17:57.187 fused_ordering(237) 00:17:57.187 fused_ordering(238) 00:17:57.187 fused_ordering(239) 00:17:57.187 fused_ordering(240) 00:17:57.187 fused_ordering(241) 00:17:57.187 fused_ordering(242) 00:17:57.187 fused_ordering(243) 00:17:57.187 fused_ordering(244) 00:17:57.187 fused_ordering(245) 00:17:57.187 fused_ordering(246) 00:17:57.187 fused_ordering(247) 00:17:57.187 fused_ordering(248) 00:17:57.187 fused_ordering(249) 00:17:57.187 fused_ordering(250) 00:17:57.187 fused_ordering(251) 00:17:57.187 fused_ordering(252) 00:17:57.187 fused_ordering(253) 00:17:57.187 fused_ordering(254) 00:17:57.187 fused_ordering(255) 00:17:57.187 fused_ordering(256) 00:17:57.187 fused_ordering(257) 00:17:57.187 fused_ordering(258) 00:17:57.187 fused_ordering(259) 00:17:57.187 fused_ordering(260) 00:17:57.187 fused_ordering(261) 00:17:57.187 fused_ordering(262) 00:17:57.187 fused_ordering(263) 00:17:57.187 fused_ordering(264) 00:17:57.187 fused_ordering(265) 00:17:57.187 fused_ordering(266) 00:17:57.187 fused_ordering(267) 00:17:57.188 fused_ordering(268) 00:17:57.188 fused_ordering(269) 00:17:57.188 fused_ordering(270) 00:17:57.188 fused_ordering(271) 
00:17:57.188 fused_ordering(272) 00:17:57.188 fused_ordering(273) 00:17:57.188 fused_ordering(274) 00:17:57.188 fused_ordering(275) 00:17:57.188 fused_ordering(276) 00:17:57.188 fused_ordering(277) 00:17:57.188 fused_ordering(278) 00:17:57.188 fused_ordering(279) 00:17:57.188 fused_ordering(280) 00:17:57.188 fused_ordering(281) 00:17:57.188 fused_ordering(282) 00:17:57.188 fused_ordering(283) 00:17:57.188 fused_ordering(284) 00:17:57.188 fused_ordering(285) 00:17:57.188 fused_ordering(286) 00:17:57.188 fused_ordering(287) 00:17:57.188 fused_ordering(288) 00:17:57.188 fused_ordering(289) 00:17:57.188 fused_ordering(290) 00:17:57.188 fused_ordering(291) 00:17:57.188 fused_ordering(292) 00:17:57.188 fused_ordering(293) 00:17:57.188 fused_ordering(294) 00:17:57.188 fused_ordering(295) 00:17:57.188 fused_ordering(296) 00:17:57.188 fused_ordering(297) 00:17:57.188 fused_ordering(298) 00:17:57.188 fused_ordering(299) 00:17:57.188 fused_ordering(300) 00:17:57.188 fused_ordering(301) 00:17:57.188 fused_ordering(302) 00:17:57.188 fused_ordering(303) 00:17:57.188 fused_ordering(304) 00:17:57.188 fused_ordering(305) 00:17:57.188 fused_ordering(306) 00:17:57.188 fused_ordering(307) 00:17:57.188 fused_ordering(308) 00:17:57.188 fused_ordering(309) 00:17:57.188 fused_ordering(310) 00:17:57.188 fused_ordering(311) 00:17:57.188 fused_ordering(312) 00:17:57.188 fused_ordering(313) 00:17:57.188 fused_ordering(314) 00:17:57.188 fused_ordering(315) 00:17:57.188 fused_ordering(316) 00:17:57.188 fused_ordering(317) 00:17:57.188 fused_ordering(318) 00:17:57.188 fused_ordering(319) 00:17:57.188 fused_ordering(320) 00:17:57.188 fused_ordering(321) 00:17:57.188 fused_ordering(322) 00:17:57.188 fused_ordering(323) 00:17:57.188 fused_ordering(324) 00:17:57.188 fused_ordering(325) 00:17:57.188 fused_ordering(326) 00:17:57.188 fused_ordering(327) 00:17:57.188 fused_ordering(328) 00:17:57.188 fused_ordering(329) 00:17:57.188 fused_ordering(330) 00:17:57.188 fused_ordering(331) 00:17:57.188 
fused_ordering(332) 00:17:57.188 fused_ordering(333) 00:17:57.188 fused_ordering(334) 00:17:57.188 fused_ordering(335) 00:17:57.188 fused_ordering(336) 00:17:57.188 fused_ordering(337) 00:17:57.188 fused_ordering(338) 00:17:57.188 fused_ordering(339) 00:17:57.188 fused_ordering(340) 00:17:57.188 fused_ordering(341) 00:17:57.188 fused_ordering(342) 00:17:57.188 fused_ordering(343) 00:17:57.188 fused_ordering(344) 00:17:57.188 fused_ordering(345) 00:17:57.188 fused_ordering(346) 00:17:57.188 fused_ordering(347) 00:17:57.188 fused_ordering(348) 00:17:57.188 fused_ordering(349) 00:17:57.188 fused_ordering(350) 00:17:57.188 fused_ordering(351) 00:17:57.188 fused_ordering(352) 00:17:57.188 fused_ordering(353) 00:17:57.188 fused_ordering(354) 00:17:57.188 fused_ordering(355) 00:17:57.188 fused_ordering(356) 00:17:57.188 fused_ordering(357) 00:17:57.188 fused_ordering(358) 00:17:57.188 fused_ordering(359) 00:17:57.188 fused_ordering(360) 00:17:57.188 fused_ordering(361) 00:17:57.188 fused_ordering(362) 00:17:57.188 fused_ordering(363) 00:17:57.188 fused_ordering(364) 00:17:57.188 fused_ordering(365) 00:17:57.188 fused_ordering(366) 00:17:57.188 fused_ordering(367) 00:17:57.188 fused_ordering(368) 00:17:57.188 fused_ordering(369) 00:17:57.188 fused_ordering(370) 00:17:57.188 fused_ordering(371) 00:17:57.188 fused_ordering(372) 00:17:57.188 fused_ordering(373) 00:17:57.188 fused_ordering(374) 00:17:57.188 fused_ordering(375) 00:17:57.188 fused_ordering(376) 00:17:57.188 fused_ordering(377) 00:17:57.188 fused_ordering(378) 00:17:57.188 fused_ordering(379) 00:17:57.188 fused_ordering(380) 00:17:57.188 fused_ordering(381) 00:17:57.188 fused_ordering(382) 00:17:57.188 fused_ordering(383) 00:17:57.188 fused_ordering(384) 00:17:57.188 fused_ordering(385) 00:17:57.188 fused_ordering(386) 00:17:57.188 fused_ordering(387) 00:17:57.188 fused_ordering(388) 00:17:57.188 fused_ordering(389) 00:17:57.188 fused_ordering(390) 00:17:57.188 fused_ordering(391) 00:17:57.188 fused_ordering(392) 
00:17:57.188 fused_ordering(393) 00:17:57.188 fused_ordering(394) 00:17:57.188 fused_ordering(395) 00:17:57.188 fused_ordering(396) 00:17:57.188 fused_ordering(397) 00:17:57.188 fused_ordering(398) 00:17:57.188 fused_ordering(399) 00:17:57.188 fused_ordering(400) 00:17:57.188 fused_ordering(401) 00:17:57.188 fused_ordering(402) 00:17:57.188 fused_ordering(403) 00:17:57.188 fused_ordering(404) 00:17:57.188 fused_ordering(405) 00:17:57.188 fused_ordering(406) 00:17:57.188 fused_ordering(407) 00:17:57.188 fused_ordering(408) 00:17:57.188 fused_ordering(409) 00:17:57.188 fused_ordering(410) 00:17:57.755 fused_ordering(411) 00:17:57.755 fused_ordering(412) 00:17:57.755 fused_ordering(413) 00:17:57.755 fused_ordering(414) 00:17:57.755 fused_ordering(415) 00:17:57.755 fused_ordering(416) 00:17:57.755 fused_ordering(417) 00:17:57.755 fused_ordering(418) 00:17:57.755 fused_ordering(419) 00:17:57.755 fused_ordering(420) 00:17:57.755 fused_ordering(421) 00:17:57.755 fused_ordering(422) 00:17:57.755 fused_ordering(423) 00:17:57.755 fused_ordering(424) 00:17:57.755 fused_ordering(425) 00:17:57.755 fused_ordering(426) 00:17:57.755 fused_ordering(427) 00:17:57.755 fused_ordering(428) 00:17:57.755 fused_ordering(429) 00:17:57.756 fused_ordering(430) 00:17:57.756 fused_ordering(431) 00:17:57.756 fused_ordering(432) 00:17:57.756 fused_ordering(433) 00:17:57.756 fused_ordering(434) 00:17:57.756 fused_ordering(435) 00:17:57.756 fused_ordering(436) 00:17:57.756 fused_ordering(437) 00:17:57.756 fused_ordering(438) 00:17:57.756 fused_ordering(439) 00:17:57.756 fused_ordering(440) 00:17:57.756 fused_ordering(441) 00:17:57.756 fused_ordering(442) 00:17:57.756 fused_ordering(443) 00:17:57.756 fused_ordering(444) 00:17:57.756 fused_ordering(445) 00:17:57.756 fused_ordering(446) 00:17:57.756 fused_ordering(447) 00:17:57.756 fused_ordering(448) 00:17:57.756 fused_ordering(449) 00:17:57.756 fused_ordering(450) 00:17:57.756 fused_ordering(451) 00:17:57.756 fused_ordering(452) 00:17:57.756 
fused_ordering(453) 00:17:57.756 fused_ordering(454) 00:17:57.756 fused_ordering(455) 00:17:57.756 fused_ordering(456) 00:17:57.756 fused_ordering(457) 00:17:57.756 fused_ordering(458) 00:17:57.756 fused_ordering(459) 00:17:57.756 fused_ordering(460) 00:17:57.756 fused_ordering(461) 00:17:57.756 fused_ordering(462) 00:17:57.756 fused_ordering(463) 00:17:57.756 fused_ordering(464) 00:17:57.756 fused_ordering(465) 00:17:57.756 fused_ordering(466) 00:17:57.756 fused_ordering(467) 00:17:57.756 fused_ordering(468) 00:17:57.756 fused_ordering(469) 00:17:57.756 fused_ordering(470) 00:17:57.756 fused_ordering(471) 00:17:57.756 fused_ordering(472) 00:17:57.756 fused_ordering(473) 00:17:57.756 fused_ordering(474) 00:17:57.756 fused_ordering(475) 00:17:57.756 fused_ordering(476) 00:17:57.756 fused_ordering(477) 00:17:57.756 fused_ordering(478) 00:17:57.756 fused_ordering(479) 00:17:57.756 fused_ordering(480) 00:17:57.756 fused_ordering(481) 00:17:57.756 fused_ordering(482) 00:17:57.756 fused_ordering(483) 00:17:57.756 fused_ordering(484) 00:17:57.756 fused_ordering(485) 00:17:57.756 fused_ordering(486) 00:17:57.756 fused_ordering(487) 00:17:57.756 fused_ordering(488) 00:17:57.756 fused_ordering(489) 00:17:57.756 fused_ordering(490) 00:17:57.756 fused_ordering(491) 00:17:57.756 fused_ordering(492) 00:17:57.756 fused_ordering(493) 00:17:57.756 fused_ordering(494) 00:17:57.756 fused_ordering(495) 00:17:57.756 fused_ordering(496) 00:17:57.756 fused_ordering(497) 00:17:57.756 fused_ordering(498) 00:17:57.756 fused_ordering(499) 00:17:57.756 fused_ordering(500) 00:17:57.756 fused_ordering(501) 00:17:57.756 fused_ordering(502) 00:17:57.756 fused_ordering(503) 00:17:57.756 fused_ordering(504) 00:17:57.756 fused_ordering(505) 00:17:57.756 fused_ordering(506) 00:17:57.756 fused_ordering(507) 00:17:57.756 fused_ordering(508) 00:17:57.756 fused_ordering(509) 00:17:57.756 fused_ordering(510) 00:17:57.756 fused_ordering(511) 00:17:57.756 fused_ordering(512) 00:17:57.756 fused_ordering(513) 
00:17:57.756 fused_ordering(514) 00:17:57.756 fused_ordering(515) 00:17:57.756 fused_ordering(516) 00:17:57.756 fused_ordering(517) 00:17:57.756 fused_ordering(518) 00:17:57.756 fused_ordering(519) 00:17:57.756 fused_ordering(520) 00:17:57.756 fused_ordering(521) 00:17:57.756 fused_ordering(522) 00:17:57.756 fused_ordering(523) 00:17:57.756 fused_ordering(524) 00:17:57.756 fused_ordering(525) 00:17:57.756 fused_ordering(526) 00:17:57.756 fused_ordering(527) 00:17:57.756 fused_ordering(528) 00:17:57.756 fused_ordering(529) 00:17:57.756 fused_ordering(530) 00:17:57.756 fused_ordering(531) 00:17:57.756 fused_ordering(532) 00:17:57.756 fused_ordering(533) 00:17:57.756 fused_ordering(534) 00:17:57.756 fused_ordering(535) 00:17:57.756 fused_ordering(536) 00:17:57.756 fused_ordering(537) 00:17:57.756 fused_ordering(538) 00:17:57.756 fused_ordering(539) 00:17:57.756 fused_ordering(540) 00:17:57.756 fused_ordering(541) 00:17:57.756 fused_ordering(542) 00:17:57.756 fused_ordering(543) 00:17:57.756 fused_ordering(544) 00:17:57.756 fused_ordering(545) 00:17:57.756 fused_ordering(546) 00:17:57.756 fused_ordering(547) 00:17:57.756 fused_ordering(548) 00:17:57.756 fused_ordering(549) 00:17:57.756 fused_ordering(550) 00:17:57.756 fused_ordering(551) 00:17:57.756 fused_ordering(552) 00:17:57.756 fused_ordering(553) 00:17:57.756 fused_ordering(554) 00:17:57.756 fused_ordering(555) 00:17:57.756 fused_ordering(556) 00:17:57.756 fused_ordering(557) 00:17:57.756 fused_ordering(558) 00:17:57.756 fused_ordering(559) 00:17:57.756 fused_ordering(560) 00:17:57.756 fused_ordering(561) 00:17:57.756 fused_ordering(562) 00:17:57.756 fused_ordering(563) 00:17:57.756 fused_ordering(564) 00:17:57.756 fused_ordering(565) 00:17:57.756 fused_ordering(566) 00:17:57.756 fused_ordering(567) 00:17:57.756 fused_ordering(568) 00:17:57.756 fused_ordering(569) 00:17:57.756 fused_ordering(570) 00:17:57.756 fused_ordering(571) 00:17:57.756 fused_ordering(572) 00:17:57.756 fused_ordering(573) 00:17:57.756 
fused_ordering(574) 00:17:57.756 fused_ordering(575) 00:17:57.756 fused_ordering(576) 00:17:57.756 fused_ordering(577) 00:17:57.756 fused_ordering(578) 00:17:57.756 fused_ordering(579) 00:17:57.756 fused_ordering(580) 00:17:57.756 fused_ordering(581) 00:17:57.756 fused_ordering(582) 00:17:57.756 fused_ordering(583) 00:17:57.756 fused_ordering(584) 00:17:57.756 fused_ordering(585) 00:17:57.756 fused_ordering(586) 00:17:57.756 fused_ordering(587) 00:17:57.756 fused_ordering(588) 00:17:57.756 fused_ordering(589) 00:17:57.756 fused_ordering(590) 00:17:57.756 fused_ordering(591) 00:17:57.756 fused_ordering(592) 00:17:57.756 fused_ordering(593) 00:17:57.756 fused_ordering(594) 00:17:57.756 fused_ordering(595) 00:17:57.756 fused_ordering(596) 00:17:57.756 fused_ordering(597) 00:17:57.756 fused_ordering(598) 00:17:57.756 fused_ordering(599) 00:17:57.756 fused_ordering(600) 00:17:57.756 fused_ordering(601) 00:17:57.756 fused_ordering(602) 00:17:57.756 fused_ordering(603) 00:17:57.756 fused_ordering(604) 00:17:57.756 fused_ordering(605) 00:17:57.756 fused_ordering(606) 00:17:57.756 fused_ordering(607) 00:17:57.756 fused_ordering(608) 00:17:57.756 fused_ordering(609) 00:17:57.756 fused_ordering(610) 00:17:57.756 fused_ordering(611) 00:17:57.756 fused_ordering(612) 00:17:57.756 fused_ordering(613) 00:17:57.756 fused_ordering(614) 00:17:57.756 fused_ordering(615) 00:17:58.015 fused_ordering(616) 00:17:58.015 fused_ordering(617) 00:17:58.015 fused_ordering(618) 00:17:58.015 fused_ordering(619) 00:17:58.015 fused_ordering(620) 00:17:58.015 fused_ordering(621) 00:17:58.015 fused_ordering(622) 00:17:58.015 fused_ordering(623) 00:17:58.015 fused_ordering(624) 00:17:58.015 fused_ordering(625) 00:17:58.015 fused_ordering(626) 00:17:58.015 fused_ordering(627) 00:17:58.015 fused_ordering(628) 00:17:58.015 fused_ordering(629) 00:17:58.015 fused_ordering(630) 00:17:58.015 fused_ordering(631) 00:17:58.015 fused_ordering(632) 00:17:58.015 fused_ordering(633) 00:17:58.015 fused_ordering(634) 
00:17:58.015 fused_ordering(635) 00:17:58.015 fused_ordering(636) 00:17:58.015 fused_ordering(637) 00:17:58.015 fused_ordering(638) 00:17:58.015 fused_ordering(639) 00:17:58.015 fused_ordering(640) 00:17:58.015 fused_ordering(641) 00:17:58.015 fused_ordering(642) 00:17:58.015 fused_ordering(643) 00:17:58.015 fused_ordering(644) 00:17:58.015 fused_ordering(645) 00:17:58.015 fused_ordering(646) 00:17:58.015 fused_ordering(647) 00:17:58.015 fused_ordering(648) 00:17:58.015 fused_ordering(649) 00:17:58.015 fused_ordering(650) 00:17:58.015 fused_ordering(651) 00:17:58.015 fused_ordering(652) 00:17:58.015 fused_ordering(653) 00:17:58.015 fused_ordering(654) 00:17:58.015 fused_ordering(655) 00:17:58.015 fused_ordering(656) 00:17:58.015 fused_ordering(657) 00:17:58.015 fused_ordering(658) 00:17:58.015 fused_ordering(659) 00:17:58.015 fused_ordering(660) 00:17:58.015 fused_ordering(661) 00:17:58.015 fused_ordering(662) 00:17:58.015 fused_ordering(663) 00:17:58.015 fused_ordering(664) 00:17:58.015 fused_ordering(665) 00:17:58.015 fused_ordering(666) 00:17:58.015 fused_ordering(667) 00:17:58.015 fused_ordering(668) 00:17:58.015 fused_ordering(669) 00:17:58.015 fused_ordering(670) 00:17:58.015 fused_ordering(671) 00:17:58.015 fused_ordering(672) 00:17:58.015 fused_ordering(673) 00:17:58.015 fused_ordering(674) 00:17:58.015 fused_ordering(675) 00:17:58.015 fused_ordering(676) 00:17:58.015 fused_ordering(677) 00:17:58.015 fused_ordering(678) 00:17:58.015 fused_ordering(679) 00:17:58.015 fused_ordering(680) 00:17:58.015 fused_ordering(681) 00:17:58.015 fused_ordering(682) 00:17:58.015 fused_ordering(683) 00:17:58.015 fused_ordering(684) 00:17:58.015 fused_ordering(685) 00:17:58.015 fused_ordering(686) 00:17:58.015 fused_ordering(687) 00:17:58.015 fused_ordering(688) 00:17:58.015 fused_ordering(689) 00:17:58.015 fused_ordering(690) 00:17:58.015 fused_ordering(691) 00:17:58.015 fused_ordering(692) 00:17:58.015 fused_ordering(693) 00:17:58.015 fused_ordering(694) 00:17:58.015 
fused_ordering(695) 00:17:58.015 fused_ordering(696) 00:17:58.015 fused_ordering(697) 00:17:58.015 fused_ordering(698) 00:17:58.015 fused_ordering(699) 00:17:58.015 fused_ordering(700) 00:17:58.015 fused_ordering(701) 00:17:58.015 fused_ordering(702) 00:17:58.015 fused_ordering(703) 00:17:58.015 fused_ordering(704) 00:17:58.015 fused_ordering(705) 00:17:58.015 fused_ordering(706) 00:17:58.015 fused_ordering(707) 00:17:58.015 fused_ordering(708) 00:17:58.015 fused_ordering(709) 00:17:58.015 fused_ordering(710) 00:17:58.015 fused_ordering(711) 00:17:58.015 fused_ordering(712) 00:17:58.015 fused_ordering(713) 00:17:58.015 fused_ordering(714) 00:17:58.015 fused_ordering(715) 00:17:58.015 fused_ordering(716) 00:17:58.015 fused_ordering(717) 00:17:58.015 fused_ordering(718) 00:17:58.015 fused_ordering(719) 00:17:58.015 fused_ordering(720) 00:17:58.015 fused_ordering(721) 00:17:58.015 fused_ordering(722) 00:17:58.015 fused_ordering(723) 00:17:58.015 fused_ordering(724) 00:17:58.015 fused_ordering(725) 00:17:58.015 fused_ordering(726) 00:17:58.015 fused_ordering(727) 00:17:58.015 fused_ordering(728) 00:17:58.015 fused_ordering(729) 00:17:58.015 fused_ordering(730) 00:17:58.015 fused_ordering(731) 00:17:58.015 fused_ordering(732) 00:17:58.015 fused_ordering(733) 00:17:58.015 fused_ordering(734) 00:17:58.015 fused_ordering(735) 00:17:58.015 fused_ordering(736) 00:17:58.015 fused_ordering(737) 00:17:58.015 fused_ordering(738) 00:17:58.015 fused_ordering(739) 00:17:58.015 fused_ordering(740) 00:17:58.015 fused_ordering(741) 00:17:58.015 fused_ordering(742) 00:17:58.015 fused_ordering(743) 00:17:58.015 fused_ordering(744) 00:17:58.015 fused_ordering(745) 00:17:58.015 fused_ordering(746) 00:17:58.015 fused_ordering(747) 00:17:58.015 fused_ordering(748) 00:17:58.015 fused_ordering(749) 00:17:58.015 fused_ordering(750) 00:17:58.015 fused_ordering(751) 00:17:58.015 fused_ordering(752) 00:17:58.015 fused_ordering(753) 00:17:58.015 fused_ordering(754) 00:17:58.015 fused_ordering(755) 
00:17:58.015 fused_ordering(756) 00:17:58.015 fused_ordering(757) 00:17:58.015 fused_ordering(758) 00:17:58.015 fused_ordering(759) 00:17:58.015 fused_ordering(760) 00:17:58.015 fused_ordering(761) 00:17:58.015 fused_ordering(762) 00:17:58.015 fused_ordering(763) 00:17:58.015 fused_ordering(764) 00:17:58.015 fused_ordering(765) 00:17:58.015 fused_ordering(766) 00:17:58.015 fused_ordering(767) 00:17:58.015 fused_ordering(768) 00:17:58.015 fused_ordering(769) 00:17:58.015 fused_ordering(770) 00:17:58.015 fused_ordering(771) 00:17:58.015 fused_ordering(772) 00:17:58.015 fused_ordering(773) 00:17:58.015 fused_ordering(774) 00:17:58.015 fused_ordering(775) 00:17:58.015 fused_ordering(776) 00:17:58.015 fused_ordering(777) 00:17:58.015 fused_ordering(778) 00:17:58.016 fused_ordering(779) 00:17:58.016 fused_ordering(780) 00:17:58.016 fused_ordering(781) 00:17:58.016 fused_ordering(782) 00:17:58.016 fused_ordering(783) 00:17:58.016 fused_ordering(784) 00:17:58.016 fused_ordering(785) 00:17:58.016 fused_ordering(786) 00:17:58.016 fused_ordering(787) 00:17:58.016 fused_ordering(788) 00:17:58.016 fused_ordering(789) 00:17:58.016 fused_ordering(790) 00:17:58.016 fused_ordering(791) 00:17:58.016 fused_ordering(792) 00:17:58.016 fused_ordering(793) 00:17:58.016 fused_ordering(794) 00:17:58.016 fused_ordering(795) 00:17:58.016 fused_ordering(796) 00:17:58.016 fused_ordering(797) 00:17:58.016 fused_ordering(798) 00:17:58.016 fused_ordering(799) 00:17:58.016 fused_ordering(800) 00:17:58.016 fused_ordering(801) 00:17:58.016 fused_ordering(802) 00:17:58.016 fused_ordering(803) 00:17:58.016 fused_ordering(804) 00:17:58.016 fused_ordering(805) 00:17:58.016 fused_ordering(806) 00:17:58.016 fused_ordering(807) 00:17:58.016 fused_ordering(808) 00:17:58.016 fused_ordering(809) 00:17:58.016 fused_ordering(810) 00:17:58.016 fused_ordering(811) 00:17:58.016 fused_ordering(812) 00:17:58.016 fused_ordering(813) 00:17:58.016 fused_ordering(814) 00:17:58.016 fused_ordering(815) 00:17:58.016 
fused_ordering(816) 00:17:58.016 fused_ordering(817) 00:17:58.016 fused_ordering(818) 00:17:58.016 fused_ordering(819) 00:17:58.016 fused_ordering(820) 00:17:58.584 fused_ordering(821) 00:17:58.584 fused_ordering(822) 00:17:58.584 fused_ordering(823) 00:17:58.584 fused_ordering(824) 00:17:58.584 fused_ordering(825) 00:17:58.584 fused_ordering(826) 00:17:58.584 fused_ordering(827) 00:17:58.584 fused_ordering(828) 00:17:58.584 fused_ordering(829) 00:17:58.584 fused_ordering(830) 00:17:58.584 fused_ordering(831) 00:17:58.584 fused_ordering(832) 00:17:58.584 fused_ordering(833) 00:17:58.584 fused_ordering(834) 00:17:58.584 fused_ordering(835) 00:17:58.584 fused_ordering(836) 00:17:58.584 fused_ordering(837) 00:17:58.584 fused_ordering(838) 00:17:58.584 fused_ordering(839) 00:17:58.584 fused_ordering(840) 00:17:58.584 fused_ordering(841) 00:17:58.584 fused_ordering(842) 00:17:58.584 fused_ordering(843) 00:17:58.584 fused_ordering(844) 00:17:58.584 fused_ordering(845) 00:17:58.584 fused_ordering(846) 00:17:58.584 fused_ordering(847) 00:17:58.584 fused_ordering(848) 00:17:58.584 fused_ordering(849) 00:17:58.584 fused_ordering(850) 00:17:58.584 fused_ordering(851) 00:17:58.584 fused_ordering(852) 00:17:58.584 fused_ordering(853) 00:17:58.584 fused_ordering(854) 00:17:58.584 fused_ordering(855) 00:17:58.584 fused_ordering(856) 00:17:58.584 fused_ordering(857) 00:17:58.584 fused_ordering(858) 00:17:58.584 fused_ordering(859) 00:17:58.584 fused_ordering(860) 00:17:58.584 fused_ordering(861) 00:17:58.584 fused_ordering(862) 00:17:58.584 fused_ordering(863) 00:17:58.584 fused_ordering(864) 00:17:58.584 fused_ordering(865) 00:17:58.584 fused_ordering(866) 00:17:58.584 fused_ordering(867) 00:17:58.584 fused_ordering(868) 00:17:58.584 fused_ordering(869) 00:17:58.584 fused_ordering(870) 00:17:58.584 fused_ordering(871) 00:17:58.584 fused_ordering(872) 00:17:58.584 fused_ordering(873) 00:17:58.584 fused_ordering(874) 00:17:58.584 fused_ordering(875) 00:17:58.584 fused_ordering(876) 
00:17:58.584 fused_ordering(877) 00:17:58.584 fused_ordering(878) 00:17:58.584 fused_ordering(879) 00:17:58.584 fused_ordering(880) 00:17:58.584 fused_ordering(881) 00:17:58.584 fused_ordering(882) 00:17:58.584 fused_ordering(883) 00:17:58.584 fused_ordering(884) 00:17:58.584 fused_ordering(885) 00:17:58.584 fused_ordering(886) 00:17:58.584 fused_ordering(887) 00:17:58.584 fused_ordering(888) 00:17:58.584 fused_ordering(889) 00:17:58.584 fused_ordering(890) 00:17:58.584 fused_ordering(891) 00:17:58.584 fused_ordering(892) 00:17:58.584 fused_ordering(893) 00:17:58.584 fused_ordering(894) 00:17:58.584 fused_ordering(895) 00:17:58.584 fused_ordering(896) 00:17:58.584 fused_ordering(897) 00:17:58.584 fused_ordering(898) 00:17:58.584 fused_ordering(899) 00:17:58.584 fused_ordering(900) 00:17:58.584 fused_ordering(901) 00:17:58.584 fused_ordering(902) 00:17:58.584 fused_ordering(903) 00:17:58.584 fused_ordering(904) 00:17:58.584 fused_ordering(905) 00:17:58.584 fused_ordering(906) 00:17:58.584 fused_ordering(907) 00:17:58.584 fused_ordering(908) 00:17:58.584 fused_ordering(909) 00:17:58.584 fused_ordering(910) 00:17:58.584 fused_ordering(911) 00:17:58.584 fused_ordering(912) 00:17:58.584 fused_ordering(913) 00:17:58.584 fused_ordering(914) 00:17:58.584 fused_ordering(915) 00:17:58.584 fused_ordering(916) 00:17:58.584 fused_ordering(917) 00:17:58.584 fused_ordering(918) 00:17:58.584 fused_ordering(919) 00:17:58.584 fused_ordering(920) 00:17:58.584 fused_ordering(921) 00:17:58.584 fused_ordering(922) 00:17:58.584 fused_ordering(923) 00:17:58.584 fused_ordering(924) 00:17:58.584 fused_ordering(925) 00:17:58.584 fused_ordering(926) 00:17:58.584 fused_ordering(927) 00:17:58.584 fused_ordering(928) 00:17:58.584 fused_ordering(929) 00:17:58.584 fused_ordering(930) 00:17:58.584 fused_ordering(931) 00:17:58.584 fused_ordering(932) 00:17:58.584 fused_ordering(933) 00:17:58.584 fused_ordering(934) 00:17:58.584 fused_ordering(935) 00:17:58.584 fused_ordering(936) 00:17:58.584 
fused_ordering(937) 00:17:58.584 fused_ordering(938) 00:17:58.584 fused_ordering(939) 00:17:58.584 fused_ordering(940) 00:17:58.584 fused_ordering(941) 00:17:58.584 fused_ordering(942) 00:17:58.584 fused_ordering(943) 00:17:58.584 fused_ordering(944) 00:17:58.584 fused_ordering(945) 00:17:58.584 fused_ordering(946) 00:17:58.584 fused_ordering(947) 00:17:58.584 fused_ordering(948) 00:17:58.584 fused_ordering(949) 00:17:58.584 fused_ordering(950) 00:17:58.584 fused_ordering(951) 00:17:58.584 fused_ordering(952) 00:17:58.584 fused_ordering(953) 00:17:58.584 fused_ordering(954) 00:17:58.584 fused_ordering(955) 00:17:58.584 fused_ordering(956) 00:17:58.584 fused_ordering(957) 00:17:58.584 fused_ordering(958) 00:17:58.584 fused_ordering(959) 00:17:58.584 fused_ordering(960) 00:17:58.584 fused_ordering(961) 00:17:58.584 fused_ordering(962) 00:17:58.584 fused_ordering(963) 00:17:58.584 fused_ordering(964) 00:17:58.584 fused_ordering(965) 00:17:58.584 fused_ordering(966) 00:17:58.584 fused_ordering(967) 00:17:58.584 fused_ordering(968) 00:17:58.584 fused_ordering(969) 00:17:58.584 fused_ordering(970) 00:17:58.584 fused_ordering(971) 00:17:58.584 fused_ordering(972) 00:17:58.584 fused_ordering(973) 00:17:58.584 fused_ordering(974) 00:17:58.584 fused_ordering(975) 00:17:58.584 fused_ordering(976) 00:17:58.584 fused_ordering(977) 00:17:58.584 fused_ordering(978) 00:17:58.584 fused_ordering(979) 00:17:58.584 fused_ordering(980) 00:17:58.584 fused_ordering(981) 00:17:58.584 fused_ordering(982) 00:17:58.584 fused_ordering(983) 00:17:58.584 fused_ordering(984) 00:17:58.584 fused_ordering(985) 00:17:58.584 fused_ordering(986) 00:17:58.584 fused_ordering(987) 00:17:58.584 fused_ordering(988) 00:17:58.584 fused_ordering(989) 00:17:58.584 fused_ordering(990) 00:17:58.584 fused_ordering(991) 00:17:58.584 fused_ordering(992) 00:17:58.584 fused_ordering(993) 00:17:58.584 fused_ordering(994) 00:17:58.584 fused_ordering(995) 00:17:58.584 fused_ordering(996) 00:17:58.584 fused_ordering(997) 
00:17:58.584 fused_ordering(998) 00:17:58.584 fused_ordering(999) 00:17:58.584 fused_ordering(1000) 00:17:58.584 fused_ordering(1001) 00:17:58.584 fused_ordering(1002) 00:17:58.584 fused_ordering(1003) 00:17:58.584 fused_ordering(1004) 00:17:58.584 fused_ordering(1005) 00:17:58.584 fused_ordering(1006) 00:17:58.584 fused_ordering(1007) 00:17:58.584 fused_ordering(1008) 00:17:58.584 fused_ordering(1009) 00:17:58.584 fused_ordering(1010) 00:17:58.584 fused_ordering(1011) 00:17:58.584 fused_ordering(1012) 00:17:58.585 fused_ordering(1013) 00:17:58.585 fused_ordering(1014) 00:17:58.585 fused_ordering(1015) 00:17:58.585 fused_ordering(1016) 00:17:58.585 fused_ordering(1017) 00:17:58.585 fused_ordering(1018) 00:17:58.585 fused_ordering(1019) 00:17:58.585 fused_ordering(1020) 00:17:58.585 fused_ordering(1021) 00:17:58.585 fused_ordering(1022) 00:17:58.585 fused_ordering(1023) 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:58.585 rmmod nvme_tcp 00:17:58.585 rmmod nvme_fabrics 00:17:58.585 rmmod nvme_keyring 00:17:58.585 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 220004 ']' 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 220004 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 220004 ']' 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 220004 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 220004 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 220004' 00:17:58.843 killing process with pid 220004 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 220004 00:17:58.843 16:23:48 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 220004 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:58.843 16:23:49 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.378 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:01.378 00:18:01.378 real 0m7.458s 00:18:01.378 user 0m4.804s 00:18:01.378 sys 0m3.080s 00:18:01.378 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:01.378 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:18:01.378 ************************************ 00:18:01.378 END TEST nvmf_fused_ordering 00:18:01.378 ************************************ 00:18:01.378 16:23:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:18:01.378 16:23:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:01.379 ************************************ 00:18:01.379 START TEST nvmf_ns_masking 00:18:01.379 ************************************ 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:18:01.379 * Looking for test storage... 00:18:01.379 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:18:01.379 16:23:51 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.379 --rc genhtml_branch_coverage=1 00:18:01.379 --rc genhtml_function_coverage=1 00:18:01.379 --rc genhtml_legend=1 00:18:01.379 --rc geninfo_all_blocks=1 00:18:01.379 --rc geninfo_unexecuted_blocks=1 00:18:01.379 00:18:01.379 ' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.379 --rc genhtml_branch_coverage=1 00:18:01.379 --rc genhtml_function_coverage=1 00:18:01.379 --rc genhtml_legend=1 00:18:01.379 --rc geninfo_all_blocks=1 00:18:01.379 --rc geninfo_unexecuted_blocks=1 00:18:01.379 00:18:01.379 ' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.379 --rc genhtml_branch_coverage=1 00:18:01.379 --rc genhtml_function_coverage=1 00:18:01.379 --rc genhtml_legend=1 00:18:01.379 --rc geninfo_all_blocks=1 00:18:01.379 --rc geninfo_unexecuted_blocks=1 00:18:01.379 00:18:01.379 ' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:01.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:01.379 --rc genhtml_branch_coverage=1 00:18:01.379 --rc 
genhtml_function_coverage=1 00:18:01.379 --rc genhtml_legend=1 00:18:01.379 --rc geninfo_all_blocks=1 00:18:01.379 --rc geninfo_unexecuted_blocks=1 00:18:01.379 00:18:01.379 ' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:01.379 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:01.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=dbafca9a-88d3-490f-9805-0a6f10f7bab8 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=2ebab28c-ca26-43f8-b16f-9dc64e0ca157 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=530a7b91-b0a7-40d3-8644-014ae9a77718 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:18:01.380 16:23:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:03.917 16:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.917 16:23:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:03.917 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:03.917 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:18:03.917 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:03.917 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:03.917 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.917 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:18:03.917 00:18:03.917 --- 10.0.0.2 ping statistics --- 00:18:03.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.917 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:18:03.917 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.917 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.917 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:18:03.917 00:18:03.918 --- 10.0.0.1 ping statistics --- 00:18:03.918 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.918 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=222238 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
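The trace above (nvmf/common.sh@250–291) isolates the target NIC in its own network namespace so initiator and target can exchange real NVMe/TCP traffic on a single host, then verifies connectivity in both directions with ping. A minimal configuration sketch of the same setup, with interface names, addresses, and the 4420 firewall rule taken from the log (requires root; a sketch, not the harness itself):

```shell
ip netns add cvl_0_0_ns_spdk                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator IP stays on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open NVMe/TCP port
ping -c 1 10.0.0.2                                   # host -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> host
```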
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 222238 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 222238 ']' 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.918 16:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.918 [2024-11-19 16:23:53.869945] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:18:03.918 [2024-11-19 16:23:53.870015] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.918 [2024-11-19 16:23:53.941064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.918 [2024-11-19 16:23:53.987163] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:03.918 [2024-11-19 16:23:53.987217] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:03.918 [2024-11-19 16:23:53.987246] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:03.918 [2024-11-19 16:23:53.987258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:03.918 [2024-11-19 16:23:53.987269] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:03.918 [2024-11-19 16:23:53.987838] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.918 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:04.177 [2024-11-19 16:23:54.412591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.177 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:18:04.177 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:18:04.177 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
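`waitforlisten` above blocks until the freshly launched `nvmf_tgt` (pid 222238, started inside the namespace) accepts connections on the UNIX domain socket `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times. A hedged Python sketch of that poll loop (function name and delay are illustrative, not the harness code):

```python
import socket
import time

def wait_for_listen(sock_path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll a UNIX domain socket until connect() succeeds, like waitforlisten."""
    for _ in range(max_retries):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(sock_path)
            return True            # RPC server is up and accepting connections
        except OSError:
            time.sleep(delay)      # not listening yet; retry
        finally:
            s.close()
    return False
```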
00:18:04.434 Malloc1 00:18:04.434 16:23:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:05.002 Malloc2 00:18:05.002 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:05.259 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:18:05.517 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:05.777 [2024-11-19 16:23:55.890982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.777 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:18:05.777 16:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 530a7b91-b0a7-40d3-8644-014ae9a77718 -a 10.0.0.2 -s 4420 -i 4 00:18:06.038 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:18:06.038 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.038 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.038 16:23:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:06.038 16:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:07.943 [ 0]:0x1 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:07.943 
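`waitforserial` above repeatedly runs `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` until the expected number of namespaces appears, sleeping 2 seconds between rounds and giving up after 15. A generic Python sketch of that retry loop, with a counting callable standing in for the lsblk pipeline (names are illustrative assumptions):

```python
import time

def wait_for_devices(count_devices, expected: int,
                     max_tries: int = 15, delay: float = 2.0) -> bool:
    """Poll count_devices() until it reports `expected` devices, like waitforserial."""
    for _ in range(max_tries):
        if count_devices() == expected:
            return True            # all namespaces enumerated
        time.sleep(delay)          # kernel may still be attaching; retry
    return False
```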
16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a7c3d407b2b478f9484b0bf09d865fd 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a7c3d407b2b478f9484b0bf09d865fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:07.943 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.510 [ 0]:0x1 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a7c3d407b2b478f9484b0bf09d865fd 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a7c3d407b2b478f9484b0bf09d865fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:08.510 [ 1]:0x2 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:08.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:08.510 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:08.767 16:23:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:09.030 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:09.030 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 530a7b91-b0a7-40d3-8644-014ae9a77718 -a 10.0.0.2 -s 4420 -i 4 00:18:09.289 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:09.289 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:09.289 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:09.289 16:23:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:09.289 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:09.289 16:23:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
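The jq expression repeated through the log, `.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name`, pulls the controller name out of `nvme list-subsys -o json` for the subsystem just connected. An equivalent Python walk over the same document shape (the `sample` below is a hand-built assumption matching that jq path, not captured output):

```python
import json

def ctrl_for_nqn(list_subsys_json: str, nqn: str):
    """Return the first path (controller) name for the subsystem with this NQN."""
    for entry in json.loads(list_subsys_json):
        for subsys in entry.get("Subsystems", []):
            if subsys.get("NQN") == nqn:
                return subsys["Paths"][0]["Name"]
    return None

# Assumed document shape mirroring the jq filter in the log:
sample = ('[{"Subsystems": [{"NQN": "nqn.2016-06.io.spdk:cnode1",'
          ' "Paths": [{"Name": "nvme0"}]}]}]')
```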
00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.194 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
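`ns_is_visible` above decides visibility from the namespace's NGUID: when a namespace is masked from this host, `nvme id-ns -o json` reports an all-zero NGUID, while a visible namespace reports its real value (2a7c3d40… and d5eb8b2c… in this run). A small Python sketch of that comparison, using the values observed in the log:

```python
import json

ALL_ZERO = "0" * 32

def ns_is_visible(id_ns_json: str) -> bool:
    """Mirror the harness check: visible iff the reported NGUID is non-zero."""
    nguid = json.loads(id_ns_json)["nguid"]
    return nguid != ALL_ZERO

# NGUIDs observed in the log above:
visible = ns_is_visible('{"nguid": "2a7c3d407b2b478f9484b0bf09d865fd"}')
masked  = ns_is_visible('{"nguid": "00000000000000000000000000000000"}')
```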
# ns_is_visible 0x2 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.453 [ 0]:0x2 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.453 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:11.711 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:11.711 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.711 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:11.711 [ 0]:0x1 00:18:11.711 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:11.711 16:24:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a7c3d407b2b478f9484b0bf09d865fd 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a7c3d407b2b478f9484b0bf09d865fd != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:11.711 [ 1]:0x2 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:11.711 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:11.970 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:11.970 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:11.970 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.227 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:12.228 [ 0]:0x2 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:12.228 Failed to open ns nvme0n2, errno 2 00:18:12.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:12.228 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:12.487 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:12.487 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 530a7b91-b0a7-40d3-8644-014ae9a77718 -a 10.0.0.2 -s 4420 -i 4 00:18:12.745 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:12.745 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:12.745 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.745 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:12.745 16:24:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:12.745 16:24:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:14.654 [ 0]:0x1 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o 
json 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2a7c3d407b2b478f9484b0bf09d865fd 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2a7c3d407b2b478f9484b0bf09d865fd != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:14.654 [ 1]:0x2 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:14.654 16:24:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:14.914 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:14.914 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:14.914 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg 
ns_is_visible 0x1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:15.173 [ 0]:0x2 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.173 
16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:15.173 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:15.432 [2024-11-19 16:24:05.632287] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:15.432 request: 00:18:15.432 { 00:18:15.432 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:15.432 "nsid": 2, 00:18:15.432 "host": "nqn.2016-06.io.spdk:host1", 00:18:15.432 "method": "nvmf_ns_remove_host", 00:18:15.432 "req_id": 1 00:18:15.432 } 00:18:15.432 Got JSON-RPC error response 00:18:15.432 response: 00:18:15.432 { 00:18:15.432 "code": -32602, 00:18:15.432 "message": "Invalid parameters" 00:18:15.432 } 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:15.432 16:24:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:15.432 [ 0]:0x2 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:15.432 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d5eb8b2c52e84c8e9f2b252b488bae3f 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d5eb8b2c52e84c8e9f2b252b488bae3f != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:15.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=223969 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 223969 /var/tmp/host.sock 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 223969 ']' 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:15.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.691 16:24:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:15.691 [2024-11-19 16:24:05.983133] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:18:15.691 [2024-11-19 16:24:05.983207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid223969 ] 00:18:15.949 [2024-11-19 16:24:06.052621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.949 [2024-11-19 16:24:06.100630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.207 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.207 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:16.207 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:16.465 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:16.723 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid dbafca9a-88d3-490f-9805-0a6f10f7bab8 00:18:16.723 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:16.723 16:24:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DBAFCA9A88D3490F98050A6F10F7BAB8 -i 00:18:16.980 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 2ebab28c-ca26-43f8-b16f-9dc64e0ca157 00:18:16.980 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:16.980 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 2EBAB28CCA2643F8B16F9DC64E0CA157 -i 00:18:17.238 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:17.495 16:24:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:17.753 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:17.753 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:18.320 nvme0n1 00:18:18.320 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:18.320 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:18.578 nvme1n2 00:18:18.837 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:18.837 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:18.837 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:18.837 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:18.837 16:24:08 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:19.096 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:19.096 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:19.096 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:19.096 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:19.354 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ dbafca9a-88d3-490f-9805-0a6f10f7bab8 == \d\b\a\f\c\a\9\a\-\8\8\d\3\-\4\9\0\f\-\9\8\0\5\-\0\a\6\f\1\0\f\7\b\a\b\8 ]] 00:18:19.354 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:19.354 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:19.354 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:19.612 16:24:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 2ebab28c-ca26-43f8-b16f-9dc64e0ca157 == \2\e\b\a\b\2\8\c\-\c\a\2\6\-\4\3\f\8\-\b\1\6\f\-\9\d\c\6\4\e\0\c\a\1\5\7 ]] 00:18:19.612 16:24:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:19.871 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid dbafca9a-88d3-490f-9805-0a6f10f7bab8 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DBAFCA9A88D3490F98050A6F10F7BAB8 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DBAFCA9A88D3490F98050A6F10F7BAB8 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:20.129 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g DBAFCA9A88D3490F98050A6F10F7BAB8 00:18:20.387 [2024-11-19 16:24:10.594797] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:20.387 [2024-11-19 16:24:10.594834] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:20.387 [2024-11-19 16:24:10.594863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:20.387 request: 00:18:20.387 { 00:18:20.387 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:20.387 "namespace": { 00:18:20.387 "bdev_name": "invalid", 00:18:20.387 "nsid": 1, 00:18:20.387 "nguid": "DBAFCA9A88D3490F98050A6F10F7BAB8", 00:18:20.387 "no_auto_visible": false 00:18:20.387 }, 00:18:20.387 "method": "nvmf_subsystem_add_ns", 00:18:20.387 "req_id": 1 00:18:20.387 } 00:18:20.387 Got JSON-RPC error response 00:18:20.387 response: 00:18:20.387 { 00:18:20.387 "code": -32602, 00:18:20.387 "message": "Invalid parameters" 00:18:20.387 } 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid dbafca9a-88d3-490f-9805-0a6f10f7bab8 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:20.387 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g DBAFCA9A88D3490F98050A6F10F7BAB8 -i 00:18:20.645 16:24:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:22.555 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:22.555 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:22.555 16:24:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 223969 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 223969 ']' 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 223969 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 223969 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 223969' 00:18:23.123 killing process with pid 223969 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 223969 00:18:23.123 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 223969 00:18:23.381 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:23.640 16:24:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:23.640 rmmod nvme_tcp 00:18:23.899 rmmod 
nvme_fabrics 00:18:23.899 rmmod nvme_keyring 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 222238 ']' 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 222238 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 222238 ']' 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 222238 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 222238 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 222238' 00:18:23.899 killing process with pid 222238 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 222238 00:18:23.899 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 222238 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:24.159 16:24:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:24.159 16:24:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:26.066 00:18:26.066 real 0m25.101s 00:18:26.066 user 0m36.445s 00:18:26.066 sys 0m4.759s 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:26.066 ************************************ 00:18:26.066 END TEST nvmf_ns_masking 00:18:26.066 ************************************ 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:26.066 16:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:26.325 ************************************ 00:18:26.325 START TEST nvmf_nvme_cli 00:18:26.325 ************************************ 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:26.325 * Looking for test storage... 00:18:26.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:26.325 16:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:26.325 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:26.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.326 --rc genhtml_branch_coverage=1 00:18:26.326 --rc genhtml_function_coverage=1 00:18:26.326 --rc genhtml_legend=1 00:18:26.326 --rc geninfo_all_blocks=1 00:18:26.326 --rc geninfo_unexecuted_blocks=1 00:18:26.326 
00:18:26.326 ' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:26.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.326 --rc genhtml_branch_coverage=1 00:18:26.326 --rc genhtml_function_coverage=1 00:18:26.326 --rc genhtml_legend=1 00:18:26.326 --rc geninfo_all_blocks=1 00:18:26.326 --rc geninfo_unexecuted_blocks=1 00:18:26.326 00:18:26.326 ' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:26.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.326 --rc genhtml_branch_coverage=1 00:18:26.326 --rc genhtml_function_coverage=1 00:18:26.326 --rc genhtml_legend=1 00:18:26.326 --rc geninfo_all_blocks=1 00:18:26.326 --rc geninfo_unexecuted_blocks=1 00:18:26.326 00:18:26.326 ' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:26.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:26.326 --rc genhtml_branch_coverage=1 00:18:26.326 --rc genhtml_function_coverage=1 00:18:26.326 --rc genhtml_legend=1 00:18:26.326 --rc geninfo_all_blocks=1 00:18:26.326 --rc geninfo_unexecuted_blocks=1 00:18:26.326 00:18:26.326 ' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.326 16:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:26.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:26.326 16:24:16 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:28.862 16:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.862 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.862 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.862 16:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.862 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.862 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.862 16:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:28.862 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:28.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:18:28.863 00:18:28.863 --- 10.0.0.2 ping statistics --- 00:18:28.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.863 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:28.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:18:28.863 00:18:28.863 --- 10.0.0.1 ping statistics --- 00:18:28.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.863 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:28.863 16:24:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=227390 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 227390 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 227390 ']' 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:28.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:28.863 16:24:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.863 [2024-11-19 16:24:18.913525] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:18:28.863 [2024-11-19 16:24:18.913618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:28.863 [2024-11-19 16:24:18.989512] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:28.863 [2024-11-19 16:24:19.038718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:28.863 [2024-11-19 16:24:19.038771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:28.863 [2024-11-19 16:24:19.038801] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:28.863 [2024-11-19 16:24:19.038812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:28.863 [2024-11-19 16:24:19.038822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:28.863 [2024-11-19 16:24:19.040530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:28.863 [2024-11-19 16:24:19.040591] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:28.863 [2024-11-19 16:24:19.040658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:28.863 [2024-11-19 16:24:19.040660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.863 [2024-11-19 16:24:19.188082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:28.863 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 Malloc0 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 Malloc1 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 [2024-11-19 16:24:19.296330] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:18:29.172 00:18:29.172 Discovery Log Number of Records 2, Generation counter 2 00:18:29.172 =====Discovery Log Entry 0====== 00:18:29.172 trtype: tcp 00:18:29.172 adrfam: ipv4 00:18:29.172 subtype: current discovery subsystem 00:18:29.172 treq: not required 00:18:29.172 portid: 0 00:18:29.172 trsvcid: 4420 
00:18:29.172 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:29.172 traddr: 10.0.0.2 00:18:29.172 eflags: explicit discovery connections, duplicate discovery information 00:18:29.172 sectype: none 00:18:29.172 =====Discovery Log Entry 1====== 00:18:29.172 trtype: tcp 00:18:29.172 adrfam: ipv4 00:18:29.172 subtype: nvme subsystem 00:18:29.172 treq: not required 00:18:29.172 portid: 0 00:18:29.172 trsvcid: 4420 00:18:29.172 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:29.172 traddr: 10.0.0.2 00:18:29.172 eflags: none 00:18:29.172 sectype: none 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:29.172 16:24:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:29.790 16:24:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:29.790 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:29.790 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:29.790 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:29.790 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:29.790 16:24:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:32.342 
16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:32.342 /dev/nvme0n2 ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:32.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:32.342 rmmod nvme_tcp 00:18:32.342 rmmod nvme_fabrics 00:18:32.342 rmmod nvme_keyring 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 227390 ']' 
00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 227390 ']' 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 227390' 00:18:32.342 killing process with pid 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 227390 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:32.342 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:32.343 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:32.343 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:32.343 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.343 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.343 16:24:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:34.891 00:18:34.891 real 0m8.209s 00:18:34.891 user 0m14.785s 00:18:34.891 sys 0m2.329s 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 ************************************ 00:18:34.891 END TEST nvmf_nvme_cli 00:18:34.891 ************************************ 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:34.891 ************************************ 00:18:34.891 START TEST 
nvmf_vfio_user 00:18:34.891 ************************************ 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:34.891 * Looking for test storage... 00:18:34.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:34.891 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:34.892 16:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:34.892 16:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.892 --rc genhtml_branch_coverage=1 00:18:34.892 --rc genhtml_function_coverage=1 00:18:34.892 --rc genhtml_legend=1 00:18:34.892 --rc geninfo_all_blocks=1 00:18:34.892 --rc geninfo_unexecuted_blocks=1 00:18:34.892 00:18:34.892 ' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.892 --rc genhtml_branch_coverage=1 00:18:34.892 --rc genhtml_function_coverage=1 00:18:34.892 --rc genhtml_legend=1 00:18:34.892 --rc geninfo_all_blocks=1 00:18:34.892 --rc geninfo_unexecuted_blocks=1 00:18:34.892 00:18:34.892 ' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.892 --rc genhtml_branch_coverage=1 00:18:34.892 --rc genhtml_function_coverage=1 00:18:34.892 --rc genhtml_legend=1 00:18:34.892 --rc geninfo_all_blocks=1 00:18:34.892 --rc geninfo_unexecuted_blocks=1 00:18:34.892 00:18:34.892 ' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:34.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:34.892 --rc genhtml_branch_coverage=1 00:18:34.892 --rc genhtml_function_coverage=1 00:18:34.892 --rc genhtml_legend=1 00:18:34.892 --rc geninfo_all_blocks=1 00:18:34.892 --rc geninfo_unexecuted_blocks=1 00:18:34.892 00:18:34.892 ' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:34.892 
16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.892 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:34.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:34.893 16:24:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=228320 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 228320' 00:18:34.893 Process pid: 228320 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 228320 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
228320 ']' 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.893 16:24:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:34.893 [2024-11-19 16:24:24.886819] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:18:34.893 [2024-11-19 16:24:24.886919] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.893 [2024-11-19 16:24:24.954615] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:34.893 [2024-11-19 16:24:25.003423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.893 [2024-11-19 16:24:25.003482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.893 [2024-11-19 16:24:25.003510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.893 [2024-11-19 16:24:25.003521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.893 [2024-11-19 16:24:25.003530] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:34.893 [2024-11-19 16:24:25.005172] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.893 [2024-11-19 16:24:25.005231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.893 [2024-11-19 16:24:25.005298] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:34.893 [2024-11-19 16:24:25.005301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.893 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:34.893 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:34.893 16:24:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:35.832 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:36.090 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:36.091 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:36.091 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:36.091 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:36.091 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:36.351 Malloc1 00:18:36.611 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:36.869 16:24:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:37.127 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:37.386 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:37.386 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:37.386 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:37.644 Malloc2 00:18:37.644 16:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:37.903 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:38.160 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:38.423 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:38.423 [2024-11-19 16:24:28.644498] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:18:38.423 [2024-11-19 16:24:28.644541] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid228741 ] 00:18:38.423 [2024-11-19 16:24:28.695216] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:38.423 [2024-11-19 16:24:28.704536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:38.423 [2024-11-19 16:24:28.704564] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9ed669c000 00:18:38.423 [2024-11-19 16:24:28.705528] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.706519] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.707524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.708530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.709533] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.710535] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.711538] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.712543] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:38.423 [2024-11-19 16:24:28.713547] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:38.423 [2024-11-19 16:24:28.713568] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9ed5394000 00:18:38.423 [2024-11-19 16:24:28.714684] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:38.423 [2024-11-19 16:24:28.730343] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:38.423 [2024-11-19 16:24:28.730402] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:38.423 [2024-11-19 16:24:28.732657] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:38.423 [2024-11-19 16:24:28.732711] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:38.423 [2024-11-19 16:24:28.732798] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:38.423 [2024-11-19 16:24:28.732824] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:38.423 [2024-11-19 16:24:28.732835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:38.423 [2024-11-19 16:24:28.733649] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:38.423 [2024-11-19 16:24:28.733668] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:38.423 [2024-11-19 16:24:28.733680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:38.423 [2024-11-19 16:24:28.734653] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:38.423 [2024-11-19 16:24:28.734673] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:38.423 [2024-11-19 16:24:28.734686] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.738079] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:38.423 [2024-11-19 16:24:28.738099] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.738677] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:38.423 [2024-11-19 16:24:28.738695] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:38.423 [2024-11-19 16:24:28.738704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.738715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.738824] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:38.423 [2024-11-19 16:24:28.738832] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.738845] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:38.423 [2024-11-19 16:24:28.739686] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:38.423 [2024-11-19 16:24:28.740686] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:38.423 [2024-11-19 16:24:28.741695] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:38.423 [2024-11-19 16:24:28.742687] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:38.423 [2024-11-19 16:24:28.742797] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:38.423 [2024-11-19 16:24:28.743699] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:38.423 [2024-11-19 16:24:28.743717] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:38.423 [2024-11-19 16:24:28.743725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:38.423 [2024-11-19 16:24:28.743749] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:38.423 [2024-11-19 16:24:28.743764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:38.423 [2024-11-19 16:24:28.743786] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:38.423 [2024-11-19 16:24:28.743796] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:38.423 [2024-11-19 16:24:28.743803] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.423 [2024-11-19 16:24:28.743821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:38.423 [2024-11-19 16:24:28.743871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:38.423 [2024-11-19 16:24:28.743887] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:38.423 [2024-11-19 16:24:28.743895] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:38.423 [2024-11-19 16:24:28.743902] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:38.424 [2024-11-19 16:24:28.743909] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:38.424 [2024-11-19 16:24:28.743921] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:38.424 [2024-11-19 16:24:28.743930] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:38.424 [2024-11-19 16:24:28.743937] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.743952] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.743967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.743984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.424 [2024-11-19 16:24:28.744016] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.424 [2024-11-19 16:24:28.744028] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.424 [2024-11-19 16:24:28.744040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:38.424 [2024-11-19 16:24:28.744048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744083] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744099] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744142] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:38.424 [2024-11-19 16:24:28.744152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744164] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744284] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744298] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:38.424 [2024-11-19 16:24:28.744306] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:38.424 [2024-11-19 16:24:28.744312] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744371] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:38.424 [2024-11-19 16:24:28.744392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744419] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:38.424 [2024-11-19 16:24:28.744444] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:38.424 [2024-11-19 16:24:28.744451] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744460] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744509] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744524] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744535] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:38.424 [2024-11-19 16:24:28.744543] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:38.424 [2024-11-19 16:24:28.744549] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:38.424 [2024-11-19 16:24:28.744586] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744597] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744611] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744621] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744638] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744646] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:38.424 [2024-11-19 16:24:28.744653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:38.424 [2024-11-19 16:24:28.744662] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:38.424 [2024-11-19 16:24:28.744687] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744723] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744750] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 16:24:28.744815] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:38.424 [2024-11-19 16:24:28.744825] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:38.424 [2024-11-19 16:24:28.744831] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:38.424 [2024-11-19 16:24:28.744837] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:38.424 [2024-11-19 16:24:28.744843] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:38.424 [2024-11-19 16:24:28.744852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:38.424 [2024-11-19 16:24:28.744863] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:38.424 [2024-11-19 16:24:28.744871] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:38.424 [2024-11-19 16:24:28.744877] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744886] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744896] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:38.424 [2024-11-19 16:24:28.744904] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:38.424 [2024-11-19 16:24:28.744910] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744930] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:38.424 [2024-11-19 16:24:28.744937] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:38.424 [2024-11-19 16:24:28.744943] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:38.424 [2024-11-19 16:24:28.744952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:38.424 [2024-11-19 16:24:28.744963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:38.424 [2024-11-19 
16:24:28.744985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:38.425 [2024-11-19 16:24:28.745002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:38.425 [2024-11-19 16:24:28.745014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:38.425 ===================================================== 00:18:38.425 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:38.425 ===================================================== 00:18:38.425 Controller Capabilities/Features 00:18:38.425 ================================ 00:18:38.425 Vendor ID: 4e58 00:18:38.425 Subsystem Vendor ID: 4e58 00:18:38.425 Serial Number: SPDK1 00:18:38.425 Model Number: SPDK bdev Controller 00:18:38.425 Firmware Version: 25.01 00:18:38.425 Recommended Arb Burst: 6 00:18:38.425 IEEE OUI Identifier: 8d 6b 50 00:18:38.425 Multi-path I/O 00:18:38.425 May have multiple subsystem ports: Yes 00:18:38.425 May have multiple controllers: Yes 00:18:38.425 Associated with SR-IOV VF: No 00:18:38.425 Max Data Transfer Size: 131072 00:18:38.425 Max Number of Namespaces: 32 00:18:38.425 Max Number of I/O Queues: 127 00:18:38.425 NVMe Specification Version (VS): 1.3 00:18:38.425 NVMe Specification Version (Identify): 1.3 00:18:38.425 Maximum Queue Entries: 256 00:18:38.425 Contiguous Queues Required: Yes 00:18:38.425 Arbitration Mechanisms Supported 00:18:38.425 Weighted Round Robin: Not Supported 00:18:38.425 Vendor Specific: Not Supported 00:18:38.425 Reset Timeout: 15000 ms 00:18:38.425 Doorbell Stride: 4 bytes 00:18:38.425 NVM Subsystem Reset: Not Supported 00:18:38.425 Command Sets Supported 00:18:38.425 NVM Command Set: Supported 00:18:38.425 Boot Partition: Not Supported 00:18:38.425 Memory Page Size Minimum: 4096 bytes 00:18:38.425 
Memory Page Size Maximum: 4096 bytes 00:18:38.425 Persistent Memory Region: Not Supported 00:18:38.425 Optional Asynchronous Events Supported 00:18:38.425 Namespace Attribute Notices: Supported 00:18:38.425 Firmware Activation Notices: Not Supported 00:18:38.425 ANA Change Notices: Not Supported 00:18:38.425 PLE Aggregate Log Change Notices: Not Supported 00:18:38.425 LBA Status Info Alert Notices: Not Supported 00:18:38.425 EGE Aggregate Log Change Notices: Not Supported 00:18:38.425 Normal NVM Subsystem Shutdown event: Not Supported 00:18:38.425 Zone Descriptor Change Notices: Not Supported 00:18:38.425 Discovery Log Change Notices: Not Supported 00:18:38.425 Controller Attributes 00:18:38.425 128-bit Host Identifier: Supported 00:18:38.425 Non-Operational Permissive Mode: Not Supported 00:18:38.425 NVM Sets: Not Supported 00:18:38.425 Read Recovery Levels: Not Supported 00:18:38.425 Endurance Groups: Not Supported 00:18:38.425 Predictable Latency Mode: Not Supported 00:18:38.425 Traffic Based Keep ALive: Not Supported 00:18:38.425 Namespace Granularity: Not Supported 00:18:38.425 SQ Associations: Not Supported 00:18:38.425 UUID List: Not Supported 00:18:38.425 Multi-Domain Subsystem: Not Supported 00:18:38.425 Fixed Capacity Management: Not Supported 00:18:38.425 Variable Capacity Management: Not Supported 00:18:38.425 Delete Endurance Group: Not Supported 00:18:38.425 Delete NVM Set: Not Supported 00:18:38.425 Extended LBA Formats Supported: Not Supported 00:18:38.425 Flexible Data Placement Supported: Not Supported 00:18:38.425 00:18:38.425 Controller Memory Buffer Support 00:18:38.425 ================================ 00:18:38.425 Supported: No 00:18:38.425 00:18:38.425 Persistent Memory Region Support 00:18:38.425 ================================ 00:18:38.425 Supported: No 00:18:38.425 00:18:38.425 Admin Command Set Attributes 00:18:38.425 ============================ 00:18:38.425 Security Send/Receive: Not Supported 00:18:38.425 Format NVM: Not Supported 
00:18:38.425 Firmware Activate/Download: Not Supported 00:18:38.425 Namespace Management: Not Supported 00:18:38.425 Device Self-Test: Not Supported 00:18:38.425 Directives: Not Supported 00:18:38.425 NVMe-MI: Not Supported 00:18:38.425 Virtualization Management: Not Supported 00:18:38.425 Doorbell Buffer Config: Not Supported 00:18:38.425 Get LBA Status Capability: Not Supported 00:18:38.425 Command & Feature Lockdown Capability: Not Supported 00:18:38.425 Abort Command Limit: 4 00:18:38.425 Async Event Request Limit: 4 00:18:38.425 Number of Firmware Slots: N/A 00:18:38.425 Firmware Slot 1 Read-Only: N/A 00:18:38.425 Firmware Activation Without Reset: N/A 00:18:38.425 Multiple Update Detection Support: N/A 00:18:38.425 Firmware Update Granularity: No Information Provided 00:18:38.425 Per-Namespace SMART Log: No 00:18:38.425 Asymmetric Namespace Access Log Page: Not Supported 00:18:38.425 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:38.425 Command Effects Log Page: Supported 00:18:38.425 Get Log Page Extended Data: Supported 00:18:38.425 Telemetry Log Pages: Not Supported 00:18:38.425 Persistent Event Log Pages: Not Supported 00:18:38.425 Supported Log Pages Log Page: May Support 00:18:38.425 Commands Supported & Effects Log Page: Not Supported 00:18:38.425 Feature Identifiers & Effects Log Page:May Support 00:18:38.425 NVMe-MI Commands & Effects Log Page: May Support 00:18:38.425 Data Area 4 for Telemetry Log: Not Supported 00:18:38.425 Error Log Page Entries Supported: 128 00:18:38.425 Keep Alive: Supported 00:18:38.425 Keep Alive Granularity: 10000 ms 00:18:38.425 00:18:38.425 NVM Command Set Attributes 00:18:38.425 ========================== 00:18:38.425 Submission Queue Entry Size 00:18:38.425 Max: 64 00:18:38.425 Min: 64 00:18:38.425 Completion Queue Entry Size 00:18:38.425 Max: 16 00:18:38.425 Min: 16 00:18:38.425 Number of Namespaces: 32 00:18:38.425 Compare Command: Supported 00:18:38.425 Write Uncorrectable Command: Not Supported 00:18:38.425 Dataset 
Management Command: Supported 00:18:38.425 Write Zeroes Command: Supported 00:18:38.425 Set Features Save Field: Not Supported 00:18:38.425 Reservations: Not Supported 00:18:38.425 Timestamp: Not Supported 00:18:38.425 Copy: Supported 00:18:38.425 Volatile Write Cache: Present 00:18:38.425 Atomic Write Unit (Normal): 1 00:18:38.425 Atomic Write Unit (PFail): 1 00:18:38.425 Atomic Compare & Write Unit: 1 00:18:38.425 Fused Compare & Write: Supported 00:18:38.425 Scatter-Gather List 00:18:38.425 SGL Command Set: Supported (Dword aligned) 00:18:38.425 SGL Keyed: Not Supported 00:18:38.425 SGL Bit Bucket Descriptor: Not Supported 00:18:38.425 SGL Metadata Pointer: Not Supported 00:18:38.425 Oversized SGL: Not Supported 00:18:38.425 SGL Metadata Address: Not Supported 00:18:38.425 SGL Offset: Not Supported 00:18:38.425 Transport SGL Data Block: Not Supported 00:18:38.425 Replay Protected Memory Block: Not Supported 00:18:38.425 00:18:38.425 Firmware Slot Information 00:18:38.425 ========================= 00:18:38.425 Active slot: 1 00:18:38.425 Slot 1 Firmware Revision: 25.01 00:18:38.425 00:18:38.425 00:18:38.425 Commands Supported and Effects 00:18:38.425 ============================== 00:18:38.425 Admin Commands 00:18:38.425 -------------- 00:18:38.425 Get Log Page (02h): Supported 00:18:38.425 Identify (06h): Supported 00:18:38.425 Abort (08h): Supported 00:18:38.425 Set Features (09h): Supported 00:18:38.425 Get Features (0Ah): Supported 00:18:38.425 Asynchronous Event Request (0Ch): Supported 00:18:38.425 Keep Alive (18h): Supported 00:18:38.425 I/O Commands 00:18:38.425 ------------ 00:18:38.425 Flush (00h): Supported LBA-Change 00:18:38.425 Write (01h): Supported LBA-Change 00:18:38.425 Read (02h): Supported 00:18:38.425 Compare (05h): Supported 00:18:38.425 Write Zeroes (08h): Supported LBA-Change 00:18:38.425 Dataset Management (09h): Supported LBA-Change 00:18:38.425 Copy (19h): Supported LBA-Change 00:18:38.425 00:18:38.425 Error Log 00:18:38.425 ========= 
00:18:38.425 00:18:38.425 Arbitration 00:18:38.425 =========== 00:18:38.425 Arbitration Burst: 1 00:18:38.425 00:18:38.425 Power Management 00:18:38.425 ================ 00:18:38.425 Number of Power States: 1 00:18:38.425 Current Power State: Power State #0 00:18:38.425 Power State #0: 00:18:38.425 Max Power: 0.00 W 00:18:38.425 Non-Operational State: Operational 00:18:38.425 Entry Latency: Not Reported 00:18:38.425 Exit Latency: Not Reported 00:18:38.425 Relative Read Throughput: 0 00:18:38.425 Relative Read Latency: 0 00:18:38.425 Relative Write Throughput: 0 00:18:38.425 Relative Write Latency: 0 00:18:38.425 Idle Power: Not Reported 00:18:38.425 Active Power: Not Reported 00:18:38.425 Non-Operational Permissive Mode: Not Supported 00:18:38.426 00:18:38.426 Health Information 00:18:38.426 ================== 00:18:38.426 Critical Warnings: 00:18:38.426 Available Spare Space: OK 00:18:38.426 Temperature: OK 00:18:38.426 Device Reliability: OK 00:18:38.426 Read Only: No 00:18:38.426 Volatile Memory Backup: OK 00:18:38.426 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:38.426 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:38.426 Available Spare: 0% 00:18:38.426 [2024-11-19 16:24:28.745155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:38.426 [2024-11-19 16:24:28.745173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:38.426 [2024-11-19 16:24:28.745215] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:38.426 [2024-11-19 16:24:28.745233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.426 [2024-11-19 16:24:28.745249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.426 [2024-11-19 16:24:28.745259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.426 [2024-11-19 16:24:28.745269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:38.426 [2024-11-19 16:24:28.745726] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:38.426 [2024-11-19 16:24:28.745744] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:38.426 [2024-11-19 16:24:28.746711] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:38.426 [2024-11-19 16:24:28.746802] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:38.426 [2024-11-19 16:24:28.746817] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:38.426 [2024-11-19 16:24:28.747722] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:38.426 [2024-11-19 16:24:28.747744] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:38.426 [2024-11-19 16:24:28.747797] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:38.426 [2024-11-19 16:24:28.751081] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:38.685 Available Spare Threshold: 0% 00:18:38.685 Life Percentage Used: 0% 00:18:38.685 Data Units Read: 0 00:18:38.685 Data 
Units Written: 0 00:18:38.685 Host Read Commands: 0 00:18:38.685 Host Write Commands: 0 00:18:38.685 Controller Busy Time: 0 minutes 00:18:38.685 Power Cycles: 0 00:18:38.685 Power On Hours: 0 hours 00:18:38.685 Unsafe Shutdowns: 0 00:18:38.685 Unrecoverable Media Errors: 0 00:18:38.685 Lifetime Error Log Entries: 0 00:18:38.685 Warning Temperature Time: 0 minutes 00:18:38.685 Critical Temperature Time: 0 minutes 00:18:38.685 00:18:38.685 Number of Queues 00:18:38.685 ================ 00:18:38.685 Number of I/O Submission Queues: 127 00:18:38.685 Number of I/O Completion Queues: 127 00:18:38.685 00:18:38.685 Active Namespaces 00:18:38.685 ================= 00:18:38.685 Namespace ID:1 00:18:38.685 Error Recovery Timeout: Unlimited 00:18:38.685 Command Set Identifier: NVM (00h) 00:18:38.685 Deallocate: Supported 00:18:38.685 Deallocated/Unwritten Error: Not Supported 00:18:38.685 Deallocated Read Value: Unknown 00:18:38.685 Deallocate in Write Zeroes: Not Supported 00:18:38.685 Deallocated Guard Field: 0xFFFF 00:18:38.685 Flush: Supported 00:18:38.685 Reservation: Supported 00:18:38.685 Namespace Sharing Capabilities: Multiple Controllers 00:18:38.685 Size (in LBAs): 131072 (0GiB) 00:18:38.685 Capacity (in LBAs): 131072 (0GiB) 00:18:38.686 Utilization (in LBAs): 131072 (0GiB) 00:18:38.686 NGUID: 973799FCD21242BD86CFDC25CCD52000 00:18:38.686 UUID: 973799fc-d212-42bd-86cf-dc25ccd52000 00:18:38.686 Thin Provisioning: Not Supported 00:18:38.686 Per-NS Atomic Units: Yes 00:18:38.686 Atomic Boundary Size (Normal): 0 00:18:38.686 Atomic Boundary Size (PFail): 0 00:18:38.686 Atomic Boundary Offset: 0 00:18:38.686 Maximum Single Source Range Length: 65535 00:18:38.686 Maximum Copy Length: 65535 00:18:38.686 Maximum Source Range Count: 1 00:18:38.686 NGUID/EUI64 Never Reused: No 00:18:38.686 Namespace Write Protected: No 00:18:38.686 Number of LBA Formats: 1 00:18:38.686 Current LBA Format: LBA Format #00 00:18:38.686 LBA Format #00: Data Size: 512 Metadata Size: 0 
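The NGUID and UUID lines in the namespace listing above are the same 16 identifier bytes in two spellings (raw hex vs. canonical hyphenated form). A minimal sketch of that correspondence, using the values reported for Namespace ID 1:

```python
import uuid

# NGUID reported for Namespace ID 1 in the identify output above.
NGUID = "973799FCD21242BD86CFDC25CCD52000"

def nguid_to_uuid(nguid_hex: str) -> str:
    """Render a 32-hex-digit NGUID in canonical hyphenated, lowercase UUID form."""
    return str(uuid.UUID(hex=nguid_hex))

print(nguid_to_uuid(NGUID))  # 973799fc-d212-42bd-86cf-dc25ccd52000
```

This matches the "UUID:" line the controller reports alongside the NGUID.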
00:18:38.686 00:18:38.686 16:24:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:38.686 [2024-11-19 16:24:28.994951] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:43.968 Initializing NVMe Controllers 00:18:43.968 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:43.968 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:43.968 Initialization complete. Launching workers. 00:18:43.968 ======================================================== 00:18:43.968 Latency(us) 00:18:43.968 Device Information : IOPS MiB/s Average min max 00:18:43.968 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 32543.00 127.12 3933.05 1212.10 9665.24 00:18:43.968 ======================================================== 00:18:43.968 Total : 32543.00 127.12 3933.05 1212.10 9665.24 00:18:43.968 00:18:43.968 [2024-11-19 16:24:34.017636] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:43.968 16:24:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:43.968 [2024-11-19 16:24:34.278816] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:49.265 Initializing NVMe Controllers 00:18:49.265 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:49.265 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:49.265 Initialization complete. Launching workers. 00:18:49.265 ======================================================== 00:18:49.265 Latency(us) 00:18:49.265 Device Information : IOPS MiB/s Average min max 00:18:49.265 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15974.40 62.40 8021.18 6953.20 15986.10 00:18:49.265 ======================================================== 00:18:49.265 Total : 15974.40 62.40 8021.18 6953.20 15986.10 00:18:49.265 00:18:49.265 [2024-11-19 16:24:39.315244] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:49.265 16:24:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:49.265 [2024-11-19 16:24:39.545389] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:54.545 [2024-11-19 16:24:44.624380] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:54.545 Initializing NVMe Controllers 00:18:54.546 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:54.546 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:54.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:54.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:54.546 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:54.546 Initialization complete. Launching workers. 
00:18:54.546 Starting thread on core 2 00:18:54.546 Starting thread on core 3 00:18:54.546 Starting thread on core 1 00:18:54.546 16:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:54.804 [2024-11-19 16:24:44.938568] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.100 [2024-11-19 16:24:48.007015] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:58.100 Initializing NVMe Controllers 00:18:58.100 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.101 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:58.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:58.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:58.101 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:58.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:58.101 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:58.101 Initialization complete. Launching workers. 
00:18:58.101 Starting thread on core 1 with urgent priority queue 00:18:58.101 Starting thread on core 2 with urgent priority queue 00:18:58.101 Starting thread on core 3 with urgent priority queue 00:18:58.101 Starting thread on core 0 with urgent priority queue 00:18:58.101 SPDK bdev Controller (SPDK1 ) core 0: 4603.00 IO/s 21.72 secs/100000 ios 00:18:58.101 SPDK bdev Controller (SPDK1 ) core 1: 5491.67 IO/s 18.21 secs/100000 ios 00:18:58.101 SPDK bdev Controller (SPDK1 ) core 2: 5420.67 IO/s 18.45 secs/100000 ios 00:18:58.101 SPDK bdev Controller (SPDK1 ) core 3: 5397.33 IO/s 18.53 secs/100000 ios 00:18:58.101 ======================================================== 00:18:58.101 00:18:58.101 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:58.101 [2024-11-19 16:24:48.324549] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:58.101 Initializing NVMe Controllers 00:18:58.101 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.101 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:58.101 Namespace ID: 1 size: 0GB 00:18:58.101 Initialization complete. 00:18:58.101 INFO: using host memory buffer for IO 00:18:58.101 Hello world! 
00:18:58.101 [2024-11-19 16:24:48.359141] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:58.101 16:24:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:58.360 [2024-11-19 16:24:48.674592] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:59.740 Initializing NVMe Controllers 00:18:59.740 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.740 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:59.740 Initialization complete. Launching workers. 00:18:59.740 submit (in ns) avg, min, max = 6679.7, 3487.8, 4017142.2 00:18:59.740 complete (in ns) avg, min, max = 26821.7, 2077.8, 7009308.9 00:18:59.740 00:18:59.740 Submit histogram 00:18:59.740 ================ 00:18:59.740 Range in us Cumulative Count 00:18:59.740 3.484 - 3.508: 0.1535% ( 20) 00:18:59.740 3.508 - 3.532: 0.9288% ( 101) 00:18:59.740 3.532 - 3.556: 2.5637% ( 213) 00:18:59.740 3.556 - 3.579: 6.6856% ( 537) 00:18:59.740 3.579 - 3.603: 13.2100% ( 850) 00:18:59.740 3.603 - 3.627: 22.6205% ( 1226) 00:18:59.740 3.627 - 3.650: 32.4609% ( 1282) 00:18:59.740 3.650 - 3.674: 40.0829% ( 993) 00:18:59.740 3.674 - 3.698: 47.4209% ( 956) 00:18:59.740 3.698 - 3.721: 54.6592% ( 943) 00:18:59.740 3.721 - 3.745: 60.5388% ( 766) 00:18:59.740 3.745 - 3.769: 65.2901% ( 619) 00:18:59.740 3.769 - 3.793: 69.1434% ( 502) 00:18:59.740 3.793 - 3.816: 72.6819% ( 461) 00:18:59.740 3.816 - 3.840: 75.8060% ( 407) 00:18:59.740 3.840 - 3.864: 78.9914% ( 415) 00:18:59.740 3.864 - 3.887: 82.3764% ( 441) 00:18:59.740 3.887 - 3.911: 85.4160% ( 396) 00:18:59.740 3.911 - 3.935: 87.4731% ( 268) 00:18:59.740 3.935 - 3.959: 89.1465% ( 218) 00:18:59.740 3.959 - 3.982: 90.8582% ( 
223) 00:18:59.740 3.982 - 4.006: 92.4087% ( 202) 00:18:59.740 4.006 - 4.030: 93.7135% ( 170) 00:18:59.740 4.030 - 4.053: 94.6577% ( 123) 00:18:59.740 4.053 - 4.077: 95.3485% ( 90) 00:18:59.740 4.077 - 4.101: 95.9395% ( 77) 00:18:59.740 4.101 - 4.124: 96.2926% ( 46) 00:18:59.740 4.124 - 4.148: 96.5536% ( 34) 00:18:59.740 4.148 - 4.172: 96.7608% ( 27) 00:18:59.740 4.172 - 4.196: 96.9067% ( 19) 00:18:59.740 4.196 - 4.219: 96.9834% ( 10) 00:18:59.740 4.219 - 4.243: 97.1062% ( 16) 00:18:59.740 4.243 - 4.267: 97.2214% ( 15) 00:18:59.740 4.267 - 4.290: 97.3212% ( 13) 00:18:59.740 4.290 - 4.314: 97.3672% ( 6) 00:18:59.740 4.314 - 4.338: 97.4056% ( 5) 00:18:59.740 4.338 - 4.361: 97.4209% ( 2) 00:18:59.740 4.361 - 4.385: 97.4593% ( 5) 00:18:59.740 4.385 - 4.409: 97.5207% ( 8) 00:18:59.740 4.456 - 4.480: 97.5284% ( 1) 00:18:59.740 4.480 - 4.504: 97.5361% ( 1) 00:18:59.740 4.504 - 4.527: 97.5438% ( 1) 00:18:59.740 4.527 - 4.551: 97.5591% ( 2) 00:18:59.740 4.622 - 4.646: 97.5668% ( 1) 00:18:59.740 4.670 - 4.693: 97.6128% ( 6) 00:18:59.740 4.693 - 4.717: 97.6589% ( 6) 00:18:59.740 4.717 - 4.741: 97.6819% ( 3) 00:18:59.740 4.741 - 4.764: 97.6973% ( 2) 00:18:59.740 4.764 - 4.788: 97.7126% ( 2) 00:18:59.740 4.788 - 4.812: 97.7433% ( 4) 00:18:59.740 4.812 - 4.836: 97.8124% ( 9) 00:18:59.740 4.836 - 4.859: 97.8431% ( 4) 00:18:59.740 4.859 - 4.883: 97.9199% ( 10) 00:18:59.740 4.883 - 4.907: 97.9352% ( 2) 00:18:59.740 4.907 - 4.930: 97.9736% ( 5) 00:18:59.740 4.930 - 4.954: 98.0043% ( 4) 00:18:59.740 4.954 - 4.978: 98.0504% ( 6) 00:18:59.740 4.978 - 5.001: 98.0657% ( 2) 00:18:59.740 5.001 - 5.025: 98.1118% ( 6) 00:18:59.740 5.025 - 5.049: 98.1271% ( 2) 00:18:59.740 5.049 - 5.073: 98.1655% ( 5) 00:18:59.740 5.073 - 5.096: 98.1808% ( 2) 00:18:59.740 5.096 - 5.120: 98.2115% ( 4) 00:18:59.740 5.120 - 5.144: 98.2192% ( 1) 00:18:59.740 5.144 - 5.167: 98.2422% ( 3) 00:18:59.740 5.167 - 5.191: 98.2576% ( 2) 00:18:59.740 5.215 - 5.239: 98.2653% ( 1) 00:18:59.740 5.239 - 5.262: 98.2730% ( 1) 
00:18:59.740 5.262 - 5.286: 98.2806% ( 1) 00:18:59.740 5.286 - 5.310: 98.3113% ( 4) 00:18:59.740 5.310 - 5.333: 98.3190% ( 1) 00:18:59.740 5.357 - 5.381: 98.3267% ( 1) 00:18:59.740 5.381 - 5.404: 98.3344% ( 1) 00:18:59.740 5.476 - 5.499: 98.3420% ( 1) 00:18:59.740 5.570 - 5.594: 98.3497% ( 1) 00:18:59.740 5.618 - 5.641: 98.3651% ( 2) 00:18:59.740 5.689 - 5.713: 98.3727% ( 1) 00:18:59.740 5.736 - 5.760: 98.3804% ( 1) 00:18:59.740 5.760 - 5.784: 98.3881% ( 1) 00:18:59.740 5.784 - 5.807: 98.3958% ( 1) 00:18:59.740 5.926 - 5.950: 98.4034% ( 1) 00:18:59.740 5.950 - 5.973: 98.4111% ( 1) 00:18:59.740 6.068 - 6.116: 98.4341% ( 3) 00:18:59.740 6.116 - 6.163: 98.4418% ( 1) 00:18:59.740 6.400 - 6.447: 98.4495% ( 1) 00:18:59.740 6.495 - 6.542: 98.4725% ( 3) 00:18:59.740 6.542 - 6.590: 98.4802% ( 1) 00:18:59.740 6.732 - 6.779: 98.4879% ( 1) 00:18:59.740 6.779 - 6.827: 98.4955% ( 1) 00:18:59.740 6.969 - 7.016: 98.5032% ( 1) 00:18:59.740 7.206 - 7.253: 98.5109% ( 1) 00:18:59.740 7.301 - 7.348: 98.5186% ( 1) 00:18:59.740 7.490 - 7.538: 98.5263% ( 1) 00:18:59.740 7.585 - 7.633: 98.5416% ( 2) 00:18:59.740 7.775 - 7.822: 98.5493% ( 1) 00:18:59.740 7.822 - 7.870: 98.5570% ( 1) 00:18:59.740 7.964 - 8.012: 98.5723% ( 2) 00:18:59.740 8.012 - 8.059: 98.5953% ( 3) 00:18:59.740 8.059 - 8.107: 98.6030% ( 1) 00:18:59.740 8.107 - 8.154: 98.6107% ( 1) 00:18:59.740 8.154 - 8.201: 98.6184% ( 1) 00:18:59.740 8.201 - 8.249: 98.6260% ( 1) 00:18:59.740 8.296 - 8.344: 98.6337% ( 1) 00:18:59.740 8.533 - 8.581: 98.6491% ( 2) 00:18:59.740 8.581 - 8.628: 98.6644% ( 2) 00:18:59.740 8.628 - 8.676: 98.6721% ( 1) 00:18:59.740 8.676 - 8.723: 98.6798% ( 1) 00:18:59.740 8.865 - 8.913: 98.6874% ( 1) 00:18:59.740 8.960 - 9.007: 98.6951% ( 1) 00:18:59.740 9.007 - 9.055: 98.7028% ( 1) 00:18:59.740 9.055 - 9.102: 98.7181% ( 2) 00:18:59.740 9.102 - 9.150: 98.7335% ( 2) 00:18:59.740 9.150 - 9.197: 98.7412% ( 1) 00:18:59.740 9.197 - 9.244: 98.7565% ( 2) 00:18:59.740 9.244 - 9.292: 98.7642% ( 1) 00:18:59.740 9.292 - 
9.339: 98.7719% ( 1) 00:18:59.740 9.339 - 9.387: 98.7872% ( 2) 00:18:59.740 9.434 - 9.481: 98.7949% ( 1) 00:18:59.740 9.576 - 9.624: 98.8026% ( 1) 00:18:59.740 9.624 - 9.671: 98.8179% ( 2) 00:18:59.740 9.719 - 9.766: 98.8333% ( 2) 00:18:59.740 9.861 - 9.908: 98.8410% ( 1) 00:18:59.740 10.003 - 10.050: 98.8486% ( 1) 00:18:59.740 10.193 - 10.240: 98.8640% ( 2) 00:18:59.740 10.240 - 10.287: 98.8717% ( 1) 00:18:59.740 10.477 - 10.524: 98.8870% ( 2) 00:18:59.740 10.667 - 10.714: 98.8947% ( 1) 00:18:59.740 10.999 - 11.046: 98.9024% ( 1) 00:18:59.740 11.141 - 11.188: 98.9100% ( 1) 00:18:59.740 11.188 - 11.236: 98.9177% ( 1) 00:18:59.740 11.567 - 11.615: 98.9254% ( 1) 00:18:59.740 11.852 - 11.899: 98.9331% ( 1) 00:18:59.740 12.610 - 12.705: 98.9484% ( 2) 00:18:59.740 12.895 - 12.990: 98.9714% ( 3) 00:18:59.740 12.990 - 13.084: 98.9791% ( 1) 00:18:59.740 13.179 - 13.274: 98.9868% ( 1) 00:18:59.741 13.464 - 13.559: 98.9945% ( 1) 00:18:59.741 13.653 - 13.748: 99.0098% ( 2) 00:18:59.741 13.843 - 13.938: 99.0175% ( 1) 00:18:59.741 14.127 - 14.222: 99.0252% ( 1) 00:18:59.741 14.412 - 14.507: 99.0329% ( 1) 00:18:59.741 14.507 - 14.601: 99.0482% ( 2) 00:18:59.741 14.601 - 14.696: 99.0559% ( 1) 00:18:59.741 15.170 - 15.265: 99.0636% ( 1) 00:18:59.741 16.782 - 16.877: 99.0712% ( 1) 00:18:59.741 17.161 - 17.256: 99.0789% ( 1) 00:18:59.741 17.256 - 17.351: 99.0866% ( 1) 00:18:59.741 17.351 - 17.446: 99.1403% ( 7) 00:18:59.741 17.446 - 17.541: 99.1787% ( 5) 00:18:59.741 17.541 - 17.636: 99.2247% ( 6) 00:18:59.741 17.636 - 17.730: 99.2938% ( 9) 00:18:59.741 17.730 - 17.825: 99.3552% ( 8) 00:18:59.741 17.825 - 17.920: 99.3859% ( 4) 00:18:59.741 17.920 - 18.015: 99.4320% ( 6) 00:18:59.741 18.015 - 18.110: 99.4704% ( 5) 00:18:59.741 18.110 - 18.204: 99.5318% ( 8) 00:18:59.741 18.204 - 18.299: 99.5625% ( 4) 00:18:59.741 18.299 - 18.394: 99.6085% ( 6) 00:18:59.741 18.394 - 18.489: 99.6930% ( 11) 00:18:59.741 18.489 - 18.584: 99.7544% ( 8) 00:18:59.741 18.584 - 18.679: 99.7774% ( 3) 
00:18:59.741 18.679 - 18.773: 99.8004% ( 3) 00:18:59.741 18.773 - 18.868: 99.8235% ( 3) 00:18:59.741 18.868 - 18.963: 99.8618% ( 5) 00:18:59.741 19.247 - 19.342: 99.8772% ( 2) 00:18:59.741 19.342 - 19.437: 99.8925% ( 2) 00:18:59.741 19.532 - 19.627: 99.9002% ( 1) 00:18:59.741 19.627 - 19.721: 99.9079% ( 1) 00:18:59.741 22.471 - 22.566: 99.9156% ( 1) 00:18:59.741 27.496 - 27.686: 99.9232% ( 1) 00:18:59.741 28.634 - 28.824: 99.9309% ( 1) 00:18:59.741 3980.705 - 4004.978: 99.9693% ( 5) 00:18:59.741 4004.978 - 4029.250: 100.0000% ( 4) 00:18:59.741 00:18:59.741 Complete histogram 00:18:59.741 ================== 00:18:59.741 Range in us Cumulative Count 00:18:59.741 2.074 - 2.086: 4.0068% ( 522) 00:18:59.741 2.086 - 2.098: 39.4995% ( 4624) 00:18:59.741 2.098 - 2.110: 48.4188% ( 1162) 00:18:59.741 2.110 - 2.121: 53.2776% ( 633) 00:18:59.741 2.121 - 2.133: 59.6101% ( 825) 00:18:59.741 2.133 - 2.145: 61.4062% ( 234) 00:18:59.741 2.145 - 2.157: 67.5315% ( 798) 00:18:59.741 2.157 - 2.169: 80.9411% ( 1747) 00:18:59.741 2.169 - 2.181: 83.4203% ( 323) 00:18:59.741 2.181 - 2.193: 85.8766% ( 320) 00:18:59.741 2.193 - 2.204: 88.6399% ( 360) 00:18:59.741 2.204 - 2.216: 89.4688% ( 108) 00:18:59.741 2.216 - 2.228: 90.6663% ( 156) 00:18:59.741 2.228 - 2.240: 92.1554% ( 194) 00:18:59.741 2.240 - 2.252: 94.0282% ( 244) 00:18:59.741 2.252 - 2.264: 94.8496% ( 107) 00:18:59.741 2.264 - 2.276: 95.0952% ( 32) 00:18:59.741 2.276 - 2.287: 95.2640% ( 22) 00:18:59.741 2.287 - 2.299: 95.4022% ( 18) 00:18:59.741 2.299 - 2.311: 95.5864% ( 24) 00:18:59.741 2.311 - 2.323: 95.9242% ( 44) 00:18:59.741 2.323 - 2.335: 96.1468% ( 29) 00:18:59.741 2.335 - 2.347: 96.1928% ( 6) 00:18:59.741 2.347 - 2.359: 96.2465% ( 7) 00:18:59.741 2.359 - 2.370: 96.2696% ( 3) 00:18:59.741 2.370 - 2.382: 96.3463% ( 10) 00:18:59.741 2.382 - 2.394: 96.5152% ( 22) 00:18:59.741 2.394 - 2.406: 96.7762% ( 34) 00:18:59.741 2.406 - 2.418: 97.0218% ( 32) 00:18:59.741 2.418 - 2.430: 97.2905% ( 35) 00:18:59.741 2.430 - 2.441: 97.5130% ( 
29) 00:18:59.741 2.441 - 2.453: 97.6896% ( 23) 00:18:59.741 2.453 - 2.465: 97.9506% ( 34) 00:18:59.741 2.465 - 2.477: 98.1041% ( 20) 00:18:59.741 2.477 - 2.489: 98.2039% ( 13) 00:18:59.741 2.489 - 2.501: 98.2960% ( 12) 00:18:59.741 2.501 - 2.513: 98.3727% ( 10) 00:18:59.741 2.513 - 2.524: 98.4341% ( 8) 00:18:59.741 2.524 - 2.536: 98.4648% ( 4) 00:18:59.741 2.536 - 2.548: 98.4879% ( 3) 00:18:59.741 2.548 - 2.560: 98.5032% ( 2) 00:18:59.741 2.572 - 2.584: 98.5109% ( 1) 00:18:59.741 2.584 - 2.596: 98.5339% ( 3) 00:18:59.741 2.631 - 2.643: 98.5570% ( 3) 00:18:59.741 2.643 - 2.655: 98.5646% ( 1) 00:18:59.741 2.726 - 2.738: 98.5723% ( 1) 00:18:59.741 2.797 - 2.809: 98.5800% ( 1) 00:18:59.741 3.461 - 3.484: 98.5877% ( 1) 00:18:59.741 3.484 - 3.508: 98.5953% ( 1) 00:18:59.741 3.508 - 3.532: 98.6184% ( 3) 00:18:59.741 3.579 - 3.603: 98.6337% ( 2) 00:18:59.741 3.603 - 3.627: 98.6414% ( 1) 00:18:59.741 3.627 - 3.650: 98.6491% ( 1) 00:18:59.741 3.745 - 3.769: 98.6567% ( 1) 00:18:59.741 3.887 - 3.911: 98.6644% ( 1) 00:18:59.741 4.006 - 4.030: 98.6721% ( 1) 00:18:59.741 4.030 - 4.053: 98.6798% ( 1) 00:18:59.741 4.053 - 4.077: 98.6874% ( 1) 00:18:59.741 4.290 - 4.314: 98.6951% ( 1) 00:18:59.741 5.760 - 5.784: 98.7028% ( 1) 00:18:59.741 6.116 - 6.163: 98.7105% ( 1) 00:18:59.741 6.210 - 6.258: 98.7181% ( 1) 00:18:59.741 6.400 - 6.447: 98.7258% ( 1) 00:18:59.741 6.495 - 6.542: 98.7335% ( 1) 00:18:59.741 6.542 - 6.590: 98.7412% ( 1) 00:18:59.741 6.684 - 6.732: 98.7488% ( 1) 00:18:59.741 6.874 - 6.921: 98.7642% ( 2) 00:18:59.741 7.016 - 7.064: 98.7719% ( 1) 00:18:59.741 7.159 - 7.206: 98.7796% ( 1) 00:18:59.741 7.538 - 7.585: 98.7872% ( 1) 00:18:59.741 7.633 - 7.680: 98.7949% ( 1) 00:18:59.741 7.775 - 7.822: 98.8026% ( 1) 00:18:59.741 7.917 - 7.964: 98.8103% ( 1) 00:18:59.741 8.012 - 8.059: 98.8179% ( 1) 00:18:59.741 8.154 - 8.201: 98.8256% ( 1) 00:18:59.741 8.249 - 8.296: 98.8333% ( 1) 00:18:59.741 8.439 - 8.486: 98.8410% ( 1) 00:18:59.741 8.486 - 8.533: 98.8486% ( 1) 00:18:59.741 
8.628 - 8.676: 98.8563% ( 1) 00:18:59.741 [2024-11-19 16:24:49.697813] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:59.741 8.676 - 8.723: 98.8640% ( 1) 00:18:59.741 14.981 - 15.076: 98.8717% ( 1) 00:18:59.741 15.455 - 15.550: 98.8793% ( 1) 00:18:59.741 15.550 - 15.644: 98.8870% ( 1) 00:18:59.741 15.644 - 15.739: 98.8947% ( 1) 00:18:59.741 15.739 - 15.834: 98.9100% ( 2) 00:18:59.741 15.834 - 15.929: 98.9331% ( 3) 00:18:59.741 15.929 - 16.024: 98.9561% ( 3) 00:18:59.741 16.024 - 16.119: 99.0098% ( 7) 00:18:59.741 16.119 - 16.213: 99.0175% ( 1) 00:18:59.741 16.213 - 16.308: 99.0329% ( 2) 00:18:59.741 16.308 - 16.403: 99.0712% ( 5) 00:18:59.741 16.498 - 16.593: 99.0943% ( 3) 00:18:59.741 16.593 - 16.687: 99.1250% ( 4) 00:18:59.741 16.687 - 16.782: 99.1480% ( 3) 00:18:59.741 16.782 - 16.877: 99.1633% ( 2) 00:18:59.741 16.877 - 16.972: 99.1864% ( 3) 00:18:59.741 16.972 - 17.067: 99.2017% ( 2) 00:18:59.741 17.067 - 17.161: 99.2171% ( 2) 00:18:59.741 17.161 - 17.256: 99.2247% ( 1) 00:18:59.741 17.351 - 17.446: 99.2401% ( 2) 00:18:59.741 17.446 - 17.541: 99.2478% ( 1) 00:18:59.741 17.541 - 17.636: 99.2554% ( 1) 00:18:59.741 17.636 - 17.730: 99.2708% ( 2) 00:18:59.741 17.825 - 17.920: 99.2938% ( 3) 00:18:59.741 17.920 - 18.015: 99.3092% ( 2) 00:18:59.741 18.015 - 18.110: 99.3322% ( 3) 00:18:59.741 18.110 - 18.204: 99.3476% ( 2) 00:18:59.741 18.204 - 18.299: 99.3629% ( 2) 00:18:59.741 18.299 - 18.394: 99.3783% ( 2) 00:18:59.741 18.394 - 18.489: 99.3859% ( 1) 00:18:59.741 18.868 - 18.963: 99.3936% ( 1) 00:18:59.741 3034.074 - 3046.210: 99.4013% ( 1) 00:18:59.741 3980.705 - 4004.978: 99.7006% ( 39) 00:18:59.741 4004.978 - 4029.250: 99.9770% ( 36) 00:18:59.741 4975.881 - 5000.154: 99.9846% ( 1) 00:18:59.741 5000.154 - 5024.427: 99.9923% ( 1) 00:18:59.741 6990.507 - 7039.052: 100.0000% ( 1) 00:18:59.741 00:18:59.741 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user 
/var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:59.741 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:59.741 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:59.741 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:59.741 16:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:59.741 [ 00:18:59.741 { 00:18:59.741 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:59.741 "subtype": "Discovery", 00:18:59.741 "listen_addresses": [], 00:18:59.741 "allow_any_host": true, 00:18:59.741 "hosts": [] 00:18:59.741 }, 00:18:59.741 { 00:18:59.741 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:59.741 "subtype": "NVMe", 00:18:59.741 "listen_addresses": [ 00:18:59.741 { 00:18:59.741 "trtype": "VFIOUSER", 00:18:59.741 "adrfam": "IPv4", 00:18:59.741 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:59.741 "trsvcid": "0" 00:18:59.741 } 00:18:59.741 ], 00:18:59.741 "allow_any_host": true, 00:18:59.741 "hosts": [], 00:18:59.741 "serial_number": "SPDK1", 00:18:59.742 "model_number": "SPDK bdev Controller", 00:18:59.742 "max_namespaces": 32, 00:18:59.742 "min_cntlid": 1, 00:18:59.742 "max_cntlid": 65519, 00:18:59.742 "namespaces": [ 00:18:59.742 { 00:18:59.742 "nsid": 1, 00:18:59.742 "bdev_name": "Malloc1", 00:18:59.742 "name": "Malloc1", 00:18:59.742 "nguid": "973799FCD21242BD86CFDC25CCD52000", 00:18:59.742 "uuid": "973799fc-d212-42bd-86cf-dc25ccd52000" 00:18:59.742 } 00:18:59.742 ] 00:18:59.742 }, 00:18:59.742 { 00:18:59.742 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:59.742 "subtype": "NVMe", 00:18:59.742 "listen_addresses": [ 00:18:59.742 { 00:18:59.742 "trtype": "VFIOUSER", 00:18:59.742 
"adrfam": "IPv4", 00:18:59.742 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:59.742 "trsvcid": "0" 00:18:59.742 } 00:18:59.742 ], 00:18:59.742 "allow_any_host": true, 00:18:59.742 "hosts": [], 00:18:59.742 "serial_number": "SPDK2", 00:18:59.742 "model_number": "SPDK bdev Controller", 00:18:59.742 "max_namespaces": 32, 00:18:59.742 "min_cntlid": 1, 00:18:59.742 "max_cntlid": 65519, 00:18:59.742 "namespaces": [ 00:18:59.742 { 00:18:59.742 "nsid": 1, 00:18:59.742 "bdev_name": "Malloc2", 00:18:59.742 "name": "Malloc2", 00:18:59.742 "nguid": "CE94B18FC46144D2927CB77881B259F3", 00:18:59.742 "uuid": "ce94b18f-c461-44d2-927c-b77881b259f3" 00:18:59.742 } 00:18:59.742 ] 00:18:59.742 } 00:18:59.742 ] 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=231254 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
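The `nvmf_get_subsystems` RPC output interleaved with the log above is plain JSON, so a small helper can enumerate the namespaces per subsystem. A minimal sketch; the sample record below is a trimmed copy of the dump shown above (field names as they appear in the log):

```python
import json

# Trimmed copy of the nvmf_get_subsystems output logged above.
SUBSYSTEMS_JSON = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [], "allow_any_host": true, "hosts": []},
  {"nqn": "nqn.2019-07.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "VFIOUSER", "adrfam": "IPv4",
     "traddr": "/var/run/vfio-user/domain/vfio-user1/1", "trsvcid": "0"}],
   "serial_number": "SPDK1",
   "namespaces": [{"nsid": 1, "bdev_name": "Malloc1", "name": "Malloc1",
     "nguid": "973799FCD21242BD86CFDC25CCD52000",
     "uuid": "973799fc-d212-42bd-86cf-dc25ccd52000"}]}
]
"""

def list_namespaces(raw: str) -> list[tuple[str, int, str]]:
    """Return (subsystem NQN, nsid, bdev_name) for every NVMe-subtype namespace."""
    out = []
    for subsys in json.loads(raw):
        if subsys.get("subtype") != "NVMe":
            continue  # skip the discovery subsystem, which carries no namespaces
        for ns in subsys.get("namespaces", []):
            out.append((subsys["nqn"], ns["nsid"], ns["bdev_name"]))
    return out

print(list_namespaces(SUBSYSTEMS_JSON))
```

The AER test below relies on exactly this structure: after `nvmf_subsystem_add_ns` attaches Malloc3, the same query shows a second namespace entry under cnode1.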
-e /tmp/aer_touch_file ']' 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:59.742 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:00.001 [2024-11-19 16:24:50.203642] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:00.001 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:19:00.261 Malloc3 00:19:00.261 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:19:00.829 [2024-11-19 16:24:50.867769] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:19:00.829 16:24:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:00.829 Asynchronous Event Request test 00:19:00.829 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:19:00.829 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:19:00.829 Registering asynchronous event callbacks... 00:19:00.829 Starting namespace attribute notice tests for all controllers... 00:19:00.829 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:00.829 aer_cb - Changed Namespace 00:19:00.829 Cleaning up... 
00:19:00.829 [ 00:19:00.829 { 00:19:00.829 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:00.829 "subtype": "Discovery", 00:19:00.829 "listen_addresses": [], 00:19:00.829 "allow_any_host": true, 00:19:00.829 "hosts": [] 00:19:00.829 }, 00:19:00.829 { 00:19:00.829 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:00.829 "subtype": "NVMe", 00:19:00.829 "listen_addresses": [ 00:19:00.829 { 00:19:00.829 "trtype": "VFIOUSER", 00:19:00.829 "adrfam": "IPv4", 00:19:00.829 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:00.829 "trsvcid": "0" 00:19:00.829 } 00:19:00.829 ], 00:19:00.829 "allow_any_host": true, 00:19:00.829 "hosts": [], 00:19:00.829 "serial_number": "SPDK1", 00:19:00.829 "model_number": "SPDK bdev Controller", 00:19:00.829 "max_namespaces": 32, 00:19:00.829 "min_cntlid": 1, 00:19:00.829 "max_cntlid": 65519, 00:19:00.829 "namespaces": [ 00:19:00.829 { 00:19:00.829 "nsid": 1, 00:19:00.829 "bdev_name": "Malloc1", 00:19:00.829 "name": "Malloc1", 00:19:00.829 "nguid": "973799FCD21242BD86CFDC25CCD52000", 00:19:00.829 "uuid": "973799fc-d212-42bd-86cf-dc25ccd52000" 00:19:00.829 }, 00:19:00.829 { 00:19:00.829 "nsid": 2, 00:19:00.829 "bdev_name": "Malloc3", 00:19:00.829 "name": "Malloc3", 00:19:00.829 "nguid": "7D4441EF17AC41B092182019A35A89D4", 00:19:00.829 "uuid": "7d4441ef-17ac-41b0-9218-2019a35a89d4" 00:19:00.829 } 00:19:00.829 ] 00:19:00.829 }, 00:19:00.829 { 00:19:00.829 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:00.829 "subtype": "NVMe", 00:19:00.829 "listen_addresses": [ 00:19:00.829 { 00:19:00.829 "trtype": "VFIOUSER", 00:19:00.829 "adrfam": "IPv4", 00:19:00.829 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:00.829 "trsvcid": "0" 00:19:00.829 } 00:19:00.829 ], 00:19:00.829 "allow_any_host": true, 00:19:00.829 "hosts": [], 00:19:00.829 "serial_number": "SPDK2", 00:19:00.829 "model_number": "SPDK bdev Controller", 00:19:00.829 "max_namespaces": 32, 00:19:00.829 "min_cntlid": 1, 00:19:00.829 "max_cntlid": 65519, 00:19:00.829 "namespaces": [ 
00:19:00.829 { 00:19:00.829 "nsid": 1, 00:19:00.829 "bdev_name": "Malloc2", 00:19:00.829 "name": "Malloc2", 00:19:00.829 "nguid": "CE94B18FC46144D2927CB77881B259F3", 00:19:00.829 "uuid": "ce94b18f-c461-44d2-927c-b77881b259f3" 00:19:00.829 } 00:19:00.829 ] 00:19:00.829 } 00:19:00.829 ] 00:19:00.829 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 231254 00:19:00.829 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:00.829 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:00.829 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:19:00.829 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:19:01.091 [2024-11-19 16:24:51.172593] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
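The `nvmf_get_subsystems` JSON printed above can be post-processed to list which bdev backs each namespace. A sketch, using `python3` for the JSON parsing; the literal below is a trimmed copy of the output above, whereas a live run would pipe from `rpc.py nvmf_get_subsystems` instead:

```shell
# Extract subsystem/namespace/bdev mappings from nvmf_get_subsystems
# output. The JSON literal is trimmed from the log above (discovery
# subsystem and non-namespace fields dropped).
subsystems='[
  {"nqn": "nqn.2019-07.io.spdk:cnode1",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc1"},
     {"nsid": 2, "bdev_name": "Malloc3"}]},
  {"nqn": "nqn.2019-07.io.spdk:cnode2",
   "namespaces": [
     {"nsid": 1, "bdev_name": "Malloc2"}]}
]'
mapping=$(printf '%s' "$subsystems" | python3 -c '
import json, sys
for sub in json.load(sys.stdin):
    for ns in sub.get("namespaces", []):
        print(sub["nqn"], "nsid=" + str(ns["nsid"]),
              "bdev=" + ns["bdev_name"])
')
echo "$mapping"
```

This shows the effect of the preceding `nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2` call: Malloc3 now appears as nsid 2 under cnode1.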
00:19:01.091 [2024-11-19 16:24:51.172637] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid231393 ] 00:19:01.091 [2024-11-19 16:24:51.220421] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:19:01.091 [2024-11-19 16:24:51.225741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:01.091 [2024-11-19 16:24:51.225770] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5815d6f000 00:19:01.091 [2024-11-19 16:24:51.226728] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.227732] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.228737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.229747] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.230751] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.231760] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.232762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:19:01.091 
[2024-11-19 16:24:51.235078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:19:01.091 [2024-11-19 16:24:51.235779] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:19:01.091 [2024-11-19 16:24:51.235799] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f5814a67000 00:19:01.091 [2024-11-19 16:24:51.236969] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:01.091 [2024-11-19 16:24:51.251890] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:19:01.091 [2024-11-19 16:24:51.251928] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:19:01.091 [2024-11-19 16:24:51.257063] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:01.091 [2024-11-19 16:24:51.257125] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:19:01.091 [2024-11-19 16:24:51.257212] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:19:01.091 [2024-11-19 16:24:51.257235] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:19:01.091 [2024-11-19 16:24:51.257250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:19:01.091 [2024-11-19 16:24:51.258056] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:19:01.091 [2024-11-19 16:24:51.258099] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:19:01.091 [2024-11-19 16:24:51.258114] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:19:01.091 [2024-11-19 16:24:51.259076] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:19:01.091 [2024-11-19 16:24:51.259097] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:19:01.091 [2024-11-19 16:24:51.259110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.260091] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:19:01.091 [2024-11-19 16:24:51.260111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.261093] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:19:01.091 [2024-11-19 16:24:51.261113] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:19:01.091 [2024-11-19 16:24:51.261122] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.261133] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.261242] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:19:01.091 [2024-11-19 16:24:51.261250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.261259] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:19:01.091 [2024-11-19 16:24:51.262106] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:19:01.091 [2024-11-19 16:24:51.263107] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:19:01.091 [2024-11-19 16:24:51.264121] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:01.091 [2024-11-19 16:24:51.265113] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.091 [2024-11-19 16:24:51.265193] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:19:01.091 [2024-11-19 16:24:51.266131] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:19:01.091 [2024-11-19 16:24:51.266151] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:19:01.091 [2024-11-19 16:24:51.266160] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:19:01.091 [2024-11-19 16:24:51.266189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:19:01.091 [2024-11-19 16:24:51.266208] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:19:01.091 [2024-11-19 16:24:51.266228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.091 [2024-11-19 16:24:51.266237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.091 [2024-11-19 16:24:51.266244] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.091 [2024-11-19 16:24:51.266261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.091 [2024-11-19 16:24:51.275086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:19:01.091 [2024-11-19 16:24:51.275109] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:19:01.091 [2024-11-19 16:24:51.275118] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:19:01.091 [2024-11-19 16:24:51.275125] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:19:01.091 [2024-11-19 16:24:51.275133] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:19:01.091 [2024-11-19 16:24:51.275145] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:19:01.091 [2024-11-19 16:24:51.275154] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:19:01.091 [2024-11-19 16:24:51.275161] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:19:01.091 [2024-11-19 16:24:51.275177] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:19:01.091 [2024-11-19 16:24:51.275193] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:19:01.091 [2024-11-19 16:24:51.283084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.283108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.092 [2024-11-19 16:24:51.283121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.092 [2024-11-19 16:24:51.283133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.092 [2024-11-19 16:24:51.283145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:19:01.092 [2024-11-19 16:24:51.283154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.283166] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.283179] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.291083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.291109] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:19:01.092 [2024-11-19 16:24:51.291120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.291131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.291140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.291154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.299081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.299158] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.299174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:19:01.092 
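During the init state machine above, the driver reads the VS register (`offset 0x8, value 0x10300`). A sketch decoding that value under the standard NVMe register layout (MJR in bits 31:16, MNR in bits 15:8), which agrees with the "NVMe Specification Version (VS): 1.3" line in the identify dump further down:

```shell
# Decode the VS register value 0x10300 read during controller init
# (offset 0x8 in the debug log). MJR = bits 31:16, MNR = bits 15:8.
vs=0x10300
mjr=$(( (vs >> 16) & 0xffff ))
mnr=$(( (vs >> 8) & 0xff ))
echo "NVMe ${mjr}.${mnr}"
```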
[2024-11-19 16:24:51.299188] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:19:01.092 [2024-11-19 16:24:51.299196] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:19:01.092 [2024-11-19 16:24:51.299202] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.299211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.307080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.307103] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:19:01.092 [2024-11-19 16:24:51.307123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.307138] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.307151] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.092 [2024-11-19 16:24:51.307159] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.092 [2024-11-19 16:24:51.307165] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.307174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.315083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.315110] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.315127] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.315139] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:19:01.092 [2024-11-19 16:24:51.315147] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.092 [2024-11-19 16:24:51.315153] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.315163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.323082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.323103] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323116] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323130] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323140] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323148] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323156] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323165] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:19:01.092 [2024-11-19 16:24:51.323173] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:19:01.092 [2024-11-19 16:24:51.323181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:19:01.092 [2024-11-19 16:24:51.323205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.331082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.331108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.339092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.339118] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.347080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 
16:24:51.347106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.355081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.355113] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:19:01.092 [2024-11-19 16:24:51.355124] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:19:01.092 [2024-11-19 16:24:51.355130] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:19:01.092 [2024-11-19 16:24:51.355136] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:19:01.092 [2024-11-19 16:24:51.355141] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:19:01.092 [2024-11-19 16:24:51.355151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:19:01.092 [2024-11-19 16:24:51.355162] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:19:01.092 [2024-11-19 16:24:51.355174] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:19:01.092 [2024-11-19 16:24:51.355181] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.355190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.355201] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:19:01.092 [2024-11-19 16:24:51.355209] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:19:01.092 [2024-11-19 16:24:51.355214] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.355223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.355234] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:19:01.092 [2024-11-19 16:24:51.355242] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:19:01.092 [2024-11-19 16:24:51.355248] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:19:01.092 [2024-11-19 16:24:51.355256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:19:01.092 [2024-11-19 16:24:51.363081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.363109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.363126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:19:01.092 [2024-11-19 16:24:51.363139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:19:01.092 ===================================================== 00:19:01.092 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:01.092 ===================================================== 00:19:01.092 Controller Capabilities/Features 00:19:01.092 
================================ 00:19:01.092 Vendor ID: 4e58 00:19:01.092 Subsystem Vendor ID: 4e58 00:19:01.092 Serial Number: SPDK2 00:19:01.092 Model Number: SPDK bdev Controller 00:19:01.092 Firmware Version: 25.01 00:19:01.092 Recommended Arb Burst: 6 00:19:01.092 IEEE OUI Identifier: 8d 6b 50 00:19:01.092 Multi-path I/O 00:19:01.092 May have multiple subsystem ports: Yes 00:19:01.092 May have multiple controllers: Yes 00:19:01.092 Associated with SR-IOV VF: No 00:19:01.092 Max Data Transfer Size: 131072 00:19:01.093 Max Number of Namespaces: 32 00:19:01.093 Max Number of I/O Queues: 127 00:19:01.093 NVMe Specification Version (VS): 1.3 00:19:01.093 NVMe Specification Version (Identify): 1.3 00:19:01.093 Maximum Queue Entries: 256 00:19:01.093 Contiguous Queues Required: Yes 00:19:01.093 Arbitration Mechanisms Supported 00:19:01.093 Weighted Round Robin: Not Supported 00:19:01.093 Vendor Specific: Not Supported 00:19:01.093 Reset Timeout: 15000 ms 00:19:01.093 Doorbell Stride: 4 bytes 00:19:01.093 NVM Subsystem Reset: Not Supported 00:19:01.093 Command Sets Supported 00:19:01.093 NVM Command Set: Supported 00:19:01.093 Boot Partition: Not Supported 00:19:01.093 Memory Page Size Minimum: 4096 bytes 00:19:01.093 Memory Page Size Maximum: 4096 bytes 00:19:01.093 Persistent Memory Region: Not Supported 00:19:01.093 Optional Asynchronous Events Supported 00:19:01.093 Namespace Attribute Notices: Supported 00:19:01.093 Firmware Activation Notices: Not Supported 00:19:01.093 ANA Change Notices: Not Supported 00:19:01.093 PLE Aggregate Log Change Notices: Not Supported 00:19:01.093 LBA Status Info Alert Notices: Not Supported 00:19:01.093 EGE Aggregate Log Change Notices: Not Supported 00:19:01.093 Normal NVM Subsystem Shutdown event: Not Supported 00:19:01.093 Zone Descriptor Change Notices: Not Supported 00:19:01.093 Discovery Log Change Notices: Not Supported 00:19:01.093 Controller Attributes 00:19:01.093 128-bit Host Identifier: Supported 00:19:01.093 
Non-Operational Permissive Mode: Not Supported 00:19:01.093 NVM Sets: Not Supported 00:19:01.093 Read Recovery Levels: Not Supported 00:19:01.093 Endurance Groups: Not Supported 00:19:01.093 Predictable Latency Mode: Not Supported 00:19:01.093 Traffic Based Keep ALive: Not Supported 00:19:01.093 Namespace Granularity: Not Supported 00:19:01.093 SQ Associations: Not Supported 00:19:01.093 UUID List: Not Supported 00:19:01.093 Multi-Domain Subsystem: Not Supported 00:19:01.093 Fixed Capacity Management: Not Supported 00:19:01.093 Variable Capacity Management: Not Supported 00:19:01.093 Delete Endurance Group: Not Supported 00:19:01.093 Delete NVM Set: Not Supported 00:19:01.093 Extended LBA Formats Supported: Not Supported 00:19:01.093 Flexible Data Placement Supported: Not Supported 00:19:01.093 00:19:01.093 Controller Memory Buffer Support 00:19:01.093 ================================ 00:19:01.093 Supported: No 00:19:01.093 00:19:01.093 Persistent Memory Region Support 00:19:01.093 ================================ 00:19:01.093 Supported: No 00:19:01.093 00:19:01.093 Admin Command Set Attributes 00:19:01.093 ============================ 00:19:01.093 Security Send/Receive: Not Supported 00:19:01.093 Format NVM: Not Supported 00:19:01.093 Firmware Activate/Download: Not Supported 00:19:01.093 Namespace Management: Not Supported 00:19:01.093 Device Self-Test: Not Supported 00:19:01.093 Directives: Not Supported 00:19:01.093 NVMe-MI: Not Supported 00:19:01.093 Virtualization Management: Not Supported 00:19:01.093 Doorbell Buffer Config: Not Supported 00:19:01.093 Get LBA Status Capability: Not Supported 00:19:01.093 Command & Feature Lockdown Capability: Not Supported 00:19:01.093 Abort Command Limit: 4 00:19:01.093 Async Event Request Limit: 4 00:19:01.093 Number of Firmware Slots: N/A 00:19:01.093 Firmware Slot 1 Read-Only: N/A 00:19:01.093 Firmware Activation Without Reset: N/A 00:19:01.093 Multiple Update Detection Support: N/A 00:19:01.093 Firmware Update 
Granularity: No Information Provided 00:19:01.093 Per-Namespace SMART Log: No 00:19:01.093 Asymmetric Namespace Access Log Page: Not Supported 00:19:01.093 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:19:01.093 Command Effects Log Page: Supported 00:19:01.093 Get Log Page Extended Data: Supported 00:19:01.093 Telemetry Log Pages: Not Supported 00:19:01.093 Persistent Event Log Pages: Not Supported 00:19:01.093 Supported Log Pages Log Page: May Support 00:19:01.093 Commands Supported & Effects Log Page: Not Supported 00:19:01.093 Feature Identifiers & Effects Log Page:May Support 00:19:01.093 NVMe-MI Commands & Effects Log Page: May Support 00:19:01.093 Data Area 4 for Telemetry Log: Not Supported 00:19:01.093 Error Log Page Entries Supported: 128 00:19:01.093 Keep Alive: Supported 00:19:01.093 Keep Alive Granularity: 10000 ms 00:19:01.093 00:19:01.093 NVM Command Set Attributes 00:19:01.093 ========================== 00:19:01.093 Submission Queue Entry Size 00:19:01.093 Max: 64 00:19:01.093 Min: 64 00:19:01.093 Completion Queue Entry Size 00:19:01.093 Max: 16 00:19:01.093 Min: 16 00:19:01.093 Number of Namespaces: 32 00:19:01.093 Compare Command: Supported 00:19:01.093 Write Uncorrectable Command: Not Supported 00:19:01.093 Dataset Management Command: Supported 00:19:01.093 Write Zeroes Command: Supported 00:19:01.093 Set Features Save Field: Not Supported 00:19:01.093 Reservations: Not Supported 00:19:01.093 Timestamp: Not Supported 00:19:01.093 Copy: Supported 00:19:01.093 Volatile Write Cache: Present 00:19:01.093 Atomic Write Unit (Normal): 1 00:19:01.093 Atomic Write Unit (PFail): 1 00:19:01.093 Atomic Compare & Write Unit: 1 00:19:01.093 Fused Compare & Write: Supported 00:19:01.093 Scatter-Gather List 00:19:01.093 SGL Command Set: Supported (Dword aligned) 00:19:01.093 SGL Keyed: Not Supported 00:19:01.093 SGL Bit Bucket Descriptor: Not Supported 00:19:01.093 SGL Metadata Pointer: Not Supported 00:19:01.093 Oversized SGL: Not Supported 00:19:01.093 SGL 
Metadata Address: Not Supported 00:19:01.093 SGL Offset: Not Supported 00:19:01.093 Transport SGL Data Block: Not Supported 00:19:01.093 Replay Protected Memory Block: Not Supported 00:19:01.093 00:19:01.093 Firmware Slot Information 00:19:01.093 ========================= 00:19:01.093 Active slot: 1 00:19:01.093 Slot 1 Firmware Revision: 25.01 00:19:01.093 00:19:01.093 00:19:01.093 Commands Supported and Effects 00:19:01.093 ============================== 00:19:01.093 Admin Commands 00:19:01.093 -------------- 00:19:01.093 Get Log Page (02h): Supported 00:19:01.093 Identify (06h): Supported 00:19:01.093 Abort (08h): Supported 00:19:01.093 Set Features (09h): Supported 00:19:01.093 Get Features (0Ah): Supported 00:19:01.093 Asynchronous Event Request (0Ch): Supported 00:19:01.093 Keep Alive (18h): Supported 00:19:01.093 I/O Commands 00:19:01.093 ------------ 00:19:01.093 Flush (00h): Supported LBA-Change 00:19:01.093 Write (01h): Supported LBA-Change 00:19:01.093 Read (02h): Supported 00:19:01.093 Compare (05h): Supported 00:19:01.093 Write Zeroes (08h): Supported LBA-Change 00:19:01.093 Dataset Management (09h): Supported LBA-Change 00:19:01.093 Copy (19h): Supported LBA-Change 00:19:01.093 00:19:01.093 Error Log 00:19:01.093 ========= 00:19:01.093 00:19:01.093 Arbitration 00:19:01.093 =========== 00:19:01.093 Arbitration Burst: 1 00:19:01.093 00:19:01.093 Power Management 00:19:01.093 ================ 00:19:01.093 Number of Power States: 1 00:19:01.093 Current Power State: Power State #0 00:19:01.093 Power State #0: 00:19:01.093 Max Power: 0.00 W 00:19:01.093 Non-Operational State: Operational 00:19:01.093 Entry Latency: Not Reported 00:19:01.093 Exit Latency: Not Reported 00:19:01.093 Relative Read Throughput: 0 00:19:01.093 Relative Read Latency: 0 00:19:01.093 Relative Write Throughput: 0 00:19:01.093 Relative Write Latency: 0 00:19:01.093 Idle Power: Not Reported 00:19:01.093 Active Power: Not Reported 00:19:01.093 Non-Operational Permissive Mode: Not 
Supported 00:19:01.093 00:19:01.093 Health Information 00:19:01.093 ================== 00:19:01.093 Critical Warnings: 00:19:01.093 Available Spare Space: OK 00:19:01.093 Temperature: OK 00:19:01.093 Device Reliability: OK 00:19:01.093 Read Only: No 00:19:01.093 Volatile Memory Backup: OK 00:19:01.093 Current Temperature: 0 Kelvin (-273 Celsius) 00:19:01.093 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:19:01.093 Available Spare: 0% 00:19:01.093 Available Spare Threshold: 0% 00:19:01.094 [2024-11-19 16:24:51.363257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:19:01.093 [2024-11-19 16:24:51.371084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:19:01.093 [2024-11-19 16:24:51.371132] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:19:01.093 [2024-11-19 16:24:51.371150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.093 [2024-11-19 16:24:51.371161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.094 [2024-11-19 16:24:51.371170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.094 [2024-11-19 16:24:51.371179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:01.094 [2024-11-19 16:24:51.371244] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:19:01.094 [2024-11-19 16:24:51.371264] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:19:01.094 
[2024-11-19 16:24:51.372254] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:01.094 [2024-11-19 16:24:51.372341] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:19:01.094 [2024-11-19 16:24:51.372356] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:19:01.094 [2024-11-19 16:24:51.373282] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:19:01.094 [2024-11-19 16:24:51.373311] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:19:01.094 [2024-11-19 16:24:51.373378] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:19:01.094 [2024-11-19 16:24:51.374573] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:19:01.094 Life Percentage Used: 0% 00:19:01.094 Data Units Read: 0 00:19:01.094 Data Units Written: 0 00:19:01.094 Host Read Commands: 0 00:19:01.094 Host Write Commands: 0 00:19:01.094 Controller Busy Time: 0 minutes 00:19:01.094 Power Cycles: 0 00:19:01.094 Power On Hours: 0 hours 00:19:01.094 Unsafe Shutdowns: 0 00:19:01.094 Unrecoverable Media Errors: 0 00:19:01.094 Lifetime Error Log Entries: 0 00:19:01.094 Warning Temperature Time: 0 minutes 00:19:01.094 Critical Temperature Time: 0 minutes 00:19:01.094 00:19:01.094 Number of Queues 00:19:01.094 ================ 00:19:01.094 Number of I/O Submission Queues: 127 00:19:01.094 Number of I/O Completion Queues: 127 00:19:01.094 00:19:01.094 Active Namespaces 00:19:01.094 ================= 00:19:01.094 Namespace ID:1 00:19:01.094 Error Recovery Timeout: Unlimited 
00:19:01.094 Command Set Identifier: NVM (00h) 00:19:01.094 Deallocate: Supported 00:19:01.094 Deallocated/Unwritten Error: Not Supported 00:19:01.094 Deallocated Read Value: Unknown 00:19:01.094 Deallocate in Write Zeroes: Not Supported 00:19:01.094 Deallocated Guard Field: 0xFFFF 00:19:01.094 Flush: Supported 00:19:01.094 Reservation: Supported 00:19:01.094 Namespace Sharing Capabilities: Multiple Controllers 00:19:01.094 Size (in LBAs): 131072 (0GiB) 00:19:01.094 Capacity (in LBAs): 131072 (0GiB) 00:19:01.094 Utilization (in LBAs): 131072 (0GiB) 00:19:01.094 NGUID: CE94B18FC46144D2927CB77881B259F3 00:19:01.094 UUID: ce94b18f-c461-44d2-927c-b77881b259f3 00:19:01.094 Thin Provisioning: Not Supported 00:19:01.094 Per-NS Atomic Units: Yes 00:19:01.094 Atomic Boundary Size (Normal): 0 00:19:01.094 Atomic Boundary Size (PFail): 0 00:19:01.094 Atomic Boundary Offset: 0 00:19:01.094 Maximum Single Source Range Length: 65535 00:19:01.094 Maximum Copy Length: 65535 00:19:01.094 Maximum Source Range Count: 1 00:19:01.094 NGUID/EUI64 Never Reused: No 00:19:01.094 Namespace Write Protected: No 00:19:01.094 Number of LBA Formats: 1 00:19:01.094 Current LBA Format: LBA Format #00 00:19:01.094 LBA Format #00: Data Size: 512 Metadata Size: 0 00:19:01.094 00:19:01.094 16:24:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:19:01.356 [2024-11-19 16:24:51.615006] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:06.638 Initializing NVMe Controllers 00:19:06.638 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:06.638 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:19:06.638 Initialization complete. Launching workers. 00:19:06.638 ======================================================== 00:19:06.638 Latency(us) 00:19:06.638 Device Information : IOPS MiB/s Average min max 00:19:06.638 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33435.72 130.61 3827.50 1189.80 7954.62 00:19:06.638 ======================================================== 00:19:06.638 Total : 33435.72 130.61 3827.50 1189.80 7954.62 00:19:06.638 00:19:06.638 [2024-11-19 16:24:56.719420] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:06.638 16:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:19:06.638 [2024-11-19 16:24:56.968118] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:11.916 Initializing NVMe Controllers 00:19:11.916 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:11.916 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:19:11.916 Initialization complete. Launching workers. 
00:19:11.916 ======================================================== 00:19:11.916 Latency(us) 00:19:11.916 Device Information : IOPS MiB/s Average min max 00:19:11.916 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30775.03 120.21 4158.32 1221.47 7918.94 00:19:11.916 ======================================================== 00:19:11.916 Total : 30775.03 120.21 4158.32 1221.47 7918.94 00:19:11.916 00:19:11.916 [2024-11-19 16:25:01.989392] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:11.916 16:25:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:19:11.916 [2024-11-19 16:25:02.226355] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.216 [2024-11-19 16:25:07.360224] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.216 Initializing NVMe Controllers 00:19:17.216 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:17.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:17.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:19:17.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:19:17.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:19:17.216 Initialization complete. Launching workers. 
00:19:17.216 Starting thread on core 2 00:19:17.216 Starting thread on core 3 00:19:17.216 Starting thread on core 1 00:19:17.216 16:25:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:19:17.475 [2024-11-19 16:25:07.678896] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:20.766 [2024-11-19 16:25:10.739357] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:20.766 Initializing NVMe Controllers 00:19:20.766 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.766 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.766 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:20.766 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:20.766 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:20.766 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:20.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:20.766 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:20.766 Initialization complete. Launching workers. 
00:19:20.766 Starting thread on core 1 with urgent priority queue 00:19:20.766 Starting thread on core 2 with urgent priority queue 00:19:20.766 Starting thread on core 3 with urgent priority queue 00:19:20.766 Starting thread on core 0 with urgent priority queue 00:19:20.766 SPDK bdev Controller (SPDK2 ) core 0: 6615.33 IO/s 15.12 secs/100000 ios 00:19:20.766 SPDK bdev Controller (SPDK2 ) core 1: 5899.33 IO/s 16.95 secs/100000 ios 00:19:20.766 SPDK bdev Controller (SPDK2 ) core 2: 6302.67 IO/s 15.87 secs/100000 ios 00:19:20.766 SPDK bdev Controller (SPDK2 ) core 3: 6280.67 IO/s 15.92 secs/100000 ios 00:19:20.766 ======================================================== 00:19:20.766 00:19:20.766 16:25:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:20.766 [2024-11-19 16:25:11.054719] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:20.766 Initializing NVMe Controllers 00:19:20.766 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.766 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:20.766 Namespace ID: 1 size: 0GB 00:19:20.766 Initialization complete. 00:19:20.766 INFO: using host memory buffer for IO 00:19:20.766 Hello world! 
00:19:20.766 [2024-11-19 16:25:11.066795] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:21.025 16:25:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:21.285 [2024-11-19 16:25:11.367821] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.228 Initializing NVMe Controllers 00:19:22.228 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.228 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:22.228 Initialization complete. Launching workers. 00:19:22.228 submit (in ns) avg, min, max = 6079.6, 3493.3, 4024421.1 00:19:22.228 complete (in ns) avg, min, max = 27223.3, 2050.0, 4017703.3 00:19:22.228 00:19:22.228 Submit histogram 00:19:22.228 ================ 00:19:22.228 Range in us Cumulative Count 00:19:22.228 3.484 - 3.508: 0.0772% ( 10) 00:19:22.228 3.508 - 3.532: 0.4171% ( 44) 00:19:22.228 3.532 - 3.556: 1.8536% ( 186) 00:19:22.228 3.556 - 3.579: 5.2749% ( 443) 00:19:22.228 3.579 - 3.603: 11.2913% ( 779) 00:19:22.228 3.603 - 3.627: 19.6555% ( 1083) 00:19:22.228 3.627 - 3.650: 30.6534% ( 1424) 00:19:22.228 3.650 - 3.674: 40.7013% ( 1301) 00:19:22.228 3.674 - 3.698: 48.5635% ( 1018) 00:19:22.228 3.698 - 3.721: 55.1745% ( 856) 00:19:22.228 3.721 - 3.745: 59.5845% ( 571) 00:19:22.228 3.745 - 3.769: 64.7590% ( 670) 00:19:22.228 3.769 - 3.793: 68.3735% ( 468) 00:19:22.228 3.793 - 3.816: 72.1579% ( 490) 00:19:22.228 3.816 - 3.840: 75.0463% ( 374) 00:19:22.228 3.840 - 3.864: 78.5218% ( 450) 00:19:22.228 3.864 - 3.887: 81.8736% ( 434) 00:19:22.228 3.887 - 3.911: 84.9166% ( 394) 00:19:22.228 3.911 - 3.935: 87.3262% ( 312) 00:19:22.228 3.935 - 3.959: 89.0794% ( 227) 00:19:22.228 3.959 - 3.982: 90.7708% ( 
219) 00:19:22.228 3.982 - 4.006: 92.2150% ( 187) 00:19:22.228 4.006 - 4.030: 93.3658% ( 149) 00:19:22.228 4.030 - 4.053: 94.3157% ( 123) 00:19:22.228 4.053 - 4.077: 94.9027% ( 76) 00:19:22.228 4.077 - 4.101: 95.4047% ( 65) 00:19:22.228 4.101 - 4.124: 95.8990% ( 64) 00:19:22.228 4.124 - 4.148: 96.2156% ( 41) 00:19:22.228 4.148 - 4.172: 96.5323% ( 41) 00:19:22.228 4.172 - 4.196: 96.6481% ( 15) 00:19:22.228 4.196 - 4.219: 96.8180% ( 22) 00:19:22.228 4.219 - 4.243: 96.9339% ( 15) 00:19:22.228 4.243 - 4.267: 97.0111% ( 10) 00:19:22.228 4.267 - 4.290: 97.0729% ( 8) 00:19:22.228 4.290 - 4.314: 97.2737% ( 26) 00:19:22.228 4.314 - 4.338: 97.3355% ( 8) 00:19:22.228 4.338 - 4.361: 97.3973% ( 8) 00:19:22.228 4.361 - 4.385: 97.4591% ( 8) 00:19:22.228 4.385 - 4.409: 97.5209% ( 8) 00:19:22.228 4.409 - 4.433: 97.5749% ( 7) 00:19:22.228 4.433 - 4.456: 97.5904% ( 2) 00:19:22.228 4.456 - 4.480: 97.6058% ( 2) 00:19:22.228 4.480 - 4.504: 97.6135% ( 1) 00:19:22.228 4.504 - 4.527: 97.6290% ( 2) 00:19:22.228 4.575 - 4.599: 97.6367% ( 1) 00:19:22.228 4.599 - 4.622: 97.6599% ( 3) 00:19:22.228 4.622 - 4.646: 97.6676% ( 1) 00:19:22.228 4.646 - 4.670: 97.6908% ( 3) 00:19:22.228 4.670 - 4.693: 97.7294% ( 5) 00:19:22.228 4.693 - 4.717: 97.7525% ( 3) 00:19:22.228 4.717 - 4.741: 97.8298% ( 10) 00:19:22.228 4.741 - 4.764: 97.8916% ( 8) 00:19:22.228 4.764 - 4.788: 97.9688% ( 10) 00:19:22.228 4.788 - 4.812: 97.9842% ( 2) 00:19:22.228 4.812 - 4.836: 98.0306% ( 6) 00:19:22.228 4.836 - 4.859: 98.0846% ( 7) 00:19:22.228 4.859 - 4.883: 98.1310% ( 6) 00:19:22.228 4.883 - 4.907: 98.1773% ( 6) 00:19:22.228 4.907 - 4.930: 98.2391% ( 8) 00:19:22.228 4.930 - 4.954: 98.3009% ( 8) 00:19:22.228 4.954 - 4.978: 98.3395% ( 5) 00:19:22.228 4.978 - 5.001: 98.3781% ( 5) 00:19:22.228 5.001 - 5.025: 98.3936% ( 2) 00:19:22.228 5.025 - 5.049: 98.4399% ( 6) 00:19:22.228 5.073 - 5.096: 98.4554% ( 2) 00:19:22.228 5.096 - 5.120: 98.4785% ( 3) 00:19:22.228 5.120 - 5.144: 98.4940% ( 2) 00:19:22.228 5.144 - 5.167: 98.5017% ( 1) 
00:19:22.228 5.167 - 5.191: 98.5171% ( 2) 00:19:22.228 5.191 - 5.215: 98.5249% ( 1) 00:19:22.228 5.215 - 5.239: 98.5326% ( 1) 00:19:22.228 5.381 - 5.404: 98.5403% ( 1) 00:19:22.228 5.736 - 5.760: 98.5480% ( 1) 00:19:22.228 5.902 - 5.926: 98.5635% ( 2) 00:19:22.228 5.926 - 5.950: 98.5712% ( 1) 00:19:22.228 6.305 - 6.353: 98.5789% ( 1) 00:19:22.228 6.447 - 6.495: 98.5867% ( 1) 00:19:22.228 6.590 - 6.637: 98.5944% ( 1) 00:19:22.228 6.779 - 6.827: 98.6021% ( 1) 00:19:22.228 7.348 - 7.396: 98.6098% ( 1) 00:19:22.228 7.396 - 7.443: 98.6175% ( 1) 00:19:22.228 7.443 - 7.490: 98.6253% ( 1) 00:19:22.228 7.585 - 7.633: 98.6330% ( 1) 00:19:22.228 7.680 - 7.727: 98.6407% ( 1) 00:19:22.228 7.822 - 7.870: 98.6562% ( 2) 00:19:22.228 7.870 - 7.917: 98.6639% ( 1) 00:19:22.228 7.917 - 7.964: 98.6716% ( 1) 00:19:22.228 8.154 - 8.201: 98.6871% ( 2) 00:19:22.228 8.249 - 8.296: 98.7025% ( 2) 00:19:22.228 8.344 - 8.391: 98.7102% ( 1) 00:19:22.228 8.391 - 8.439: 98.7179% ( 1) 00:19:22.228 8.581 - 8.628: 98.7257% ( 1) 00:19:22.228 8.770 - 8.818: 98.7334% ( 1) 00:19:22.228 8.865 - 8.913: 98.7411% ( 1) 00:19:22.228 8.913 - 8.960: 98.7488% ( 1) 00:19:22.228 8.960 - 9.007: 98.7566% ( 1) 00:19:22.228 9.055 - 9.102: 98.7643% ( 1) 00:19:22.228 9.150 - 9.197: 98.7720% ( 1) 00:19:22.228 9.197 - 9.244: 98.7797% ( 1) 00:19:22.228 9.339 - 9.387: 98.7875% ( 1) 00:19:22.228 9.481 - 9.529: 98.7952% ( 1) 00:19:22.228 9.766 - 9.813: 98.8106% ( 2) 00:19:22.228 9.813 - 9.861: 98.8184% ( 1) 00:19:22.228 9.861 - 9.908: 98.8338% ( 2) 00:19:22.228 9.956 - 10.003: 98.8415% ( 1) 00:19:22.228 10.003 - 10.050: 98.8492% ( 1) 00:19:22.228 10.193 - 10.240: 98.8570% ( 1) 00:19:22.228 10.240 - 10.287: 98.8647% ( 1) 00:19:22.228 10.287 - 10.335: 98.8724% ( 1) 00:19:22.228 10.524 - 10.572: 98.8801% ( 1) 00:19:22.228 10.667 - 10.714: 98.8879% ( 1) 00:19:22.228 10.714 - 10.761: 98.8956% ( 1) 00:19:22.228 10.904 - 10.951: 98.9033% ( 1) 00:19:22.228 10.951 - 10.999: 98.9110% ( 1) 00:19:22.228 11.093 - 11.141: 98.9188% ( 1) 
00:19:22.228 11.141 - 11.188: 98.9265% ( 1) 00:19:22.228 11.188 - 11.236: 98.9342% ( 1) 00:19:22.228 11.425 - 11.473: 98.9496% ( 2) 00:19:22.228 11.567 - 11.615: 98.9574% ( 1) 00:19:22.228 11.710 - 11.757: 98.9651% ( 1) 00:19:22.228 11.852 - 11.899: 98.9728% ( 1) 00:19:22.228 12.231 - 12.326: 98.9883% ( 2) 00:19:22.228 12.326 - 12.421: 98.9960% ( 1) 00:19:22.228 12.516 - 12.610: 99.0037% ( 1) 00:19:22.228 12.895 - 12.990: 99.0114% ( 1) 00:19:22.228 13.274 - 13.369: 99.0192% ( 1) 00:19:22.228 13.653 - 13.748: 99.0269% ( 1) 00:19:22.228 13.843 - 13.938: 99.0423% ( 2) 00:19:22.228 14.127 - 14.222: 99.0500% ( 1) 00:19:22.228 14.507 - 14.601: 99.0578% ( 1) 00:19:22.228 15.170 - 15.265: 99.0655% ( 1) 00:19:22.228 16.877 - 16.972: 99.0732% ( 1) 00:19:22.228 17.067 - 17.161: 99.0809% ( 1) 00:19:22.229 17.161 - 17.256: 99.0887% ( 1) 00:19:22.229 17.256 - 17.351: 99.1041% ( 2) 00:19:22.229 17.351 - 17.446: 99.1196% ( 2) 00:19:22.229 17.446 - 17.541: 99.1582% ( 5) 00:19:22.229 17.541 - 17.636: 99.1891% ( 4) 00:19:22.229 17.636 - 17.730: 99.2122% ( 3) 00:19:22.229 17.730 - 17.825: 99.2663% ( 7) 00:19:22.229 17.825 - 17.920: 99.2972% ( 4) 00:19:22.229 17.920 - 18.015: 99.3358% ( 5) 00:19:22.229 18.015 - 18.110: 99.3899% ( 7) 00:19:22.229 18.110 - 18.204: 99.4594% ( 9) 00:19:22.229 18.204 - 18.299: 99.4980% ( 5) 00:19:22.229 18.299 - 18.394: 99.5829% ( 11) 00:19:22.229 18.394 - 18.489: 99.6525% ( 9) 00:19:22.229 18.489 - 18.584: 99.6988% ( 6) 00:19:22.229 18.584 - 18.679: 99.7606% ( 8) 00:19:22.229 18.679 - 18.773: 99.7760% ( 2) 00:19:22.229 18.773 - 18.868: 99.7992% ( 3) 00:19:22.229 18.868 - 18.963: 99.8224% ( 3) 00:19:22.229 18.963 - 19.058: 99.8301% ( 1) 00:19:22.229 19.058 - 19.153: 99.8378% ( 1) 00:19:22.229 19.153 - 19.247: 99.8533% ( 2) 00:19:22.229 19.247 - 19.342: 99.8687% ( 2) 00:19:22.229 19.342 - 19.437: 99.8764% ( 1) 00:19:22.229 20.196 - 20.290: 99.8842% ( 1) 00:19:22.229 23.799 - 23.893: 99.8919% ( 1) 00:19:22.229 24.083 - 24.178: 99.8996% ( 1) 00:19:22.229 
25.221 - 25.410: 99.9150% ( 2) 00:19:22.229 25.979 - 26.169: 99.9228% ( 1) 00:19:22.229 26.548 - 26.738: 99.9305% ( 1) 00:19:22.229 28.824 - 29.013: 99.9382% ( 1) 00:19:22.229 29.203 - 29.393: 99.9459% ( 1) 00:19:22.229 3980.705 - 4004.978: 99.9846% ( 5) 00:19:22.229 4004.978 - 4029.250: 100.0000% ( 2) 00:19:22.229 00:19:22.229 Complete histogram 00:19:22.229 ================== 00:19:22.229 Range in us Cumulative Count 00:19:22.229 2.039 - 2.050: 0.0077% ( 1) 00:19:22.229 2.050 - 2.062: 11.9246% ( 1543) 00:19:22.229 2.062 - 2.074: 43.6361% ( 4106) 00:19:22.229 2.074 - 2.086: 46.6095% ( 385) 00:19:22.229 2.086 - 2.098: 54.5258% ( 1025) 00:19:22.229 2.098 - 2.110: 60.6040% ( 787) 00:19:22.229 2.110 - 2.121: 62.2104% ( 208) 00:19:22.229 2.121 - 2.133: 74.6293% ( 1608) 00:19:22.229 2.133 - 2.145: 82.1980% ( 980) 00:19:22.229 2.145 - 2.157: 83.2870% ( 141) 00:19:22.229 2.157 - 2.169: 87.2644% ( 515) 00:19:22.229 2.169 - 2.181: 88.6701% ( 182) 00:19:22.229 2.181 - 2.193: 89.3729% ( 91) 00:19:22.229 2.193 - 2.204: 90.8789% ( 195) 00:19:22.229 2.204 - 2.216: 92.3386% ( 189) 00:19:22.229 2.216 - 2.228: 94.1690% ( 237) 00:19:22.229 2.228 - 2.240: 94.9181% ( 97) 00:19:22.229 2.240 - 2.252: 95.0572% ( 18) 00:19:22.229 2.252 - 2.264: 95.1807% ( 16) 00:19:22.229 2.264 - 2.276: 95.2966% ( 15) 00:19:22.229 2.276 - 2.287: 95.4897% ( 25) 00:19:22.229 2.287 - 2.299: 95.8217% ( 43) 00:19:22.229 2.299 - 2.311: 95.9299% ( 14) 00:19:22.229 2.311 - 2.323: 95.9762% ( 6) 00:19:22.229 2.323 - 2.335: 95.9917% ( 2) 00:19:22.229 2.335 - 2.347: 96.0303% ( 5) 00:19:22.229 2.347 - 2.359: 96.0689% ( 5) 00:19:22.229 2.359 - 2.370: 96.2234% ( 20) 00:19:22.229 2.370 - 2.382: 96.3315% ( 14) 00:19:22.229 2.382 - 2.394: 96.5168% ( 24) 00:19:22.229 2.394 - 2.406: 96.6945% ( 23) 00:19:22.229 2.406 - 2.418: 96.9339% ( 31) 00:19:22.229 2.418 - 2.430: 97.1347% ( 26) 00:19:22.229 2.430 - 2.441: 97.3664% ( 30) 00:19:22.229 2.441 - 2.453: 97.5131% ( 19) 00:19:22.229 2.453 - 2.465: 97.6985% ( 24) 00:19:22.229 
2.465 - 2.477: 97.8375% ( 18) 00:19:22.229 2.477 - 2.489: 98.0151% ( 23) 00:19:22.229 2.489 - 2.501: 98.1464% ( 17) 00:19:22.229 2.501 - 2.513: 98.2159% ( 9) 00:19:22.229 2.513 - 2.524: 98.2623% ( 6) 00:19:22.229 2.524 - 2.536: 98.3009% ( 5) 00:19:22.229 2.536 - 2.548: 98.3318% ( 4) 00:19:22.229 2.548 - 2.560: 98.3395% ( 1) 00:19:22.229 2.560 - 2.572: 98.3859% ( 6) 00:19:22.229 2.572 - 2.584: 98.4090% ( 3) 00:19:22.229 2.584 - 2.596: 98.4167% ( 1) 00:19:22.229 2.607 - 2.619: 98.4322% ( 2) 00:19:22.229 2.631 - 2.643: 98.4399% ( 1) 00:19:22.229 2.643 - 2.655: 98.4476% ( 1) 00:19:22.229 3.461 - 3.484: 98.4554% ( 1) 00:19:22.229 3.484 - 3.508: 98.4708% ( 2) 00:19:22.229 3.508 - 3.532: 98.4863% ( 2) 00:19:22.229 3.532 - 3.556: 98.5017% ( 2) 00:19:22.229 3.579 - 3.603: 98.5171% ( 2) 00:19:22.229 3.603 - 3.627: 98.5326% ( 2) 00:19:22.229 3.627 - 3.650: 98.5635% ( 4) 00:19:22.229 3.650 - 3.674: 98.5944% ( 4) 00:19:22.229 3.674 - 3.698: 98.6021% ( 1) 00:19:22.229 3.698 - 3.721: 98.6098% ( 1) 00:19:22.229 3.721 - 3.745: 98.6175% ( 1) 00:19:22.229 3.745 - 3.769: 98.6253% ( 1) 00:19:22.229 3.793 - 3.816: 98.6330% ( 1) 00:19:22.229 3.840 - 3.864: 98.6407% ( 1) 00:19:22.229 3.864 - 3.887: 98.6484% ( 1) 00:19:22.229 3.887 - 3.911: 98.6639% ( 2) 00:19:22.229 3.911 - 3.935: 98.6716% ( 1) 00:19:22.229 3.935 - 3.959: 98.6793% ( 1) 00:19:22.229 4.006 - 4.030: 98.7025% ( 3) 00:19:22.229 5.926 - 5.950: 98.7102% ( 1) 00:19:22.229 5.950 - 5.973: 98.7179% ( 1) 00:19:22.229 6.021 - 6.044: 98.7257% ( 1) 00:19:22.229 6.210 - 6.258: 98.7334% ( 1) 00:19:22.229 6.779 - 6.827: 98.7411% ( 1) 00:19:22.229 6.827 - 6.874: 98.7488% ( 1) 00:19:22.229 7.016 - 7.064: 98.7566% ( 1) 00:19:22.229 7.206 - 7.253: 98.7643% ( 1) 00:19:22.229 8.012 - 8.059: 98.7720% ( 1) 00:19:22.229 8.059 - 8.107: 98.7797% ( 1) 00:19:22.229 8.439 - 8.486: 98.7875% ( 1) 00:19:22.229 8.486 - 8.533: 98.7952% ( 1) 00:19:22.229 8.770 - 8.818: 98.8029% ( 1) 00:19:22.229 8.865 - 8.913: 98.8106% ( 1) 00:19:22.229 9.007 - 9.055: 
98.8261% ( 2) 00:19:22.229 9.671 - 9.719: 98.8338% ( 1) 00:19:22.229 [2024-11-19 16:25:12.467848] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:22.229 10.667 - 10.714: 98.8415% ( 1) 00:19:22.229 10.856 - 10.904: 98.8492% ( 1) 00:19:22.229 12.231 - 12.326: 98.8570% ( 1) 00:19:22.229 14.791 - 14.886: 98.8647% ( 1) 00:19:22.229 15.550 - 15.644: 98.8724% ( 1) 00:19:22.229 15.644 - 15.739: 98.8956% ( 3) 00:19:22.229 15.739 - 15.834: 98.9188% ( 3) 00:19:22.229 15.834 - 15.929: 98.9419% ( 3) 00:19:22.229 15.929 - 16.024: 98.9651% ( 3) 00:19:22.229 16.024 - 16.119: 99.0192% ( 7) 00:19:22.229 16.119 - 16.213: 99.0423% ( 3) 00:19:22.229 16.213 - 16.308: 99.0500% ( 1) 00:19:22.229 16.308 - 16.403: 99.1041% ( 7) 00:19:22.229 16.403 - 16.498: 99.1659% ( 8) 00:19:22.229 16.498 - 16.593: 99.1891% ( 3) 00:19:22.229 16.593 - 16.687: 99.2200% ( 4) 00:19:22.229 16.687 - 16.782: 99.2508% ( 4) 00:19:22.229 16.782 - 16.877: 99.2663% ( 2) 00:19:22.229 16.877 - 16.972: 99.3049% ( 5) 00:19:22.229 16.972 - 17.067: 99.3204% ( 2) 00:19:22.229 17.067 - 17.161: 99.3281% ( 1) 00:19:22.229 17.351 - 17.446: 99.3358% ( 1) 00:19:22.229 17.730 - 17.825: 99.3435% ( 1) 00:19:22.229 18.110 - 18.204: 99.3513% ( 1) 00:19:22.229 18.773 - 18.868: 99.3590% ( 1) 00:19:22.229 20.385 - 20.480: 99.3667% ( 1) 00:19:22.229 22.756 - 22.850: 99.3744% ( 1) 00:19:22.229 3980.705 - 4004.978: 99.9150% ( 70) 00:19:22.229 4004.978 - 4029.250: 100.0000% ( 11) 00:19:22.229 00:19:22.229 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:22.229 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:22.229 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:22.229 16:25:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:22.229 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:22.489 [ 00:19:22.489 { 00:19:22.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:22.489 "subtype": "Discovery", 00:19:22.489 "listen_addresses": [], 00:19:22.489 "allow_any_host": true, 00:19:22.489 "hosts": [] 00:19:22.489 }, 00:19:22.489 { 00:19:22.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:22.489 "subtype": "NVMe", 00:19:22.489 "listen_addresses": [ 00:19:22.489 { 00:19:22.489 "trtype": "VFIOUSER", 00:19:22.489 "adrfam": "IPv4", 00:19:22.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:22.489 "trsvcid": "0" 00:19:22.489 } 00:19:22.489 ], 00:19:22.489 "allow_any_host": true, 00:19:22.489 "hosts": [], 00:19:22.489 "serial_number": "SPDK1", 00:19:22.489 "model_number": "SPDK bdev Controller", 00:19:22.489 "max_namespaces": 32, 00:19:22.489 "min_cntlid": 1, 00:19:22.489 "max_cntlid": 65519, 00:19:22.489 "namespaces": [ 00:19:22.489 { 00:19:22.489 "nsid": 1, 00:19:22.489 "bdev_name": "Malloc1", 00:19:22.489 "name": "Malloc1", 00:19:22.489 "nguid": "973799FCD21242BD86CFDC25CCD52000", 00:19:22.489 "uuid": "973799fc-d212-42bd-86cf-dc25ccd52000" 00:19:22.489 }, 00:19:22.489 { 00:19:22.489 "nsid": 2, 00:19:22.489 "bdev_name": "Malloc3", 00:19:22.489 "name": "Malloc3", 00:19:22.489 "nguid": "7D4441EF17AC41B092182019A35A89D4", 00:19:22.489 "uuid": "7d4441ef-17ac-41b0-9218-2019a35a89d4" 00:19:22.489 } 00:19:22.489 ] 00:19:22.489 }, 00:19:22.489 { 00:19:22.489 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:22.489 "subtype": "NVMe", 00:19:22.489 "listen_addresses": [ 00:19:22.489 { 00:19:22.489 "trtype": "VFIOUSER", 00:19:22.489 "adrfam": "IPv4", 00:19:22.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:22.489 "trsvcid": "0" 00:19:22.489 } 00:19:22.489 
], 00:19:22.489 "allow_any_host": true, 00:19:22.489 "hosts": [], 00:19:22.489 "serial_number": "SPDK2", 00:19:22.489 "model_number": "SPDK bdev Controller", 00:19:22.489 "max_namespaces": 32, 00:19:22.489 "min_cntlid": 1, 00:19:22.489 "max_cntlid": 65519, 00:19:22.489 "namespaces": [ 00:19:22.489 { 00:19:22.489 "nsid": 1, 00:19:22.489 "bdev_name": "Malloc2", 00:19:22.489 "name": "Malloc2", 00:19:22.489 "nguid": "CE94B18FC46144D2927CB77881B259F3", 00:19:22.489 "uuid": "ce94b18f-c461-44d2-927c-b77881b259f3" 00:19:22.489 } 00:19:22.489 ] 00:19:22.489 } 00:19:22.489 ] 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=233907 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:22.489 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:22.748 [2024-11-19 16:25:12.960534] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:22.748 16:25:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:23.006 Malloc4 00:19:23.006 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:23.266 [2024-11-19 16:25:13.553968] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:23.266 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:23.266 Asynchronous Event Request test 00:19:23.266 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:23.266 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:23.266 
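The xtrace above expands the `waitforfile` helper from `autotest_common.sh`: it polls for the AER touch file every 0.1 s, up to 200 iterations, before giving up. A minimal standalone sketch of that polling pattern (the counter limit and sleep interval are taken from the trace; the final re-check mirrors line 1276 of the helper):

```shell
#!/bin/sh
# Sketch of the waitforfile polling loop seen in the trace above.
# Returns 0 once the file exists, 1 if it never appears within ~20 s.
waitforfile() {
    local i=0
    while [ ! -e "$1" ]; do
        if [ "$i" -ge 200 ]; then
            return 1            # timed out: 200 iterations x 0.1 s
        fi
        i=$((i + 1))
        sleep 0.1
    done
    # final existence check, as in autotest_common.sh
    [ -e "$1" ] || return 1
    return 0
}
```

In the test this gates the RPC calls that follow: the `aer` tool touches the file once its callbacks are registered, so the script only adds `Malloc4` after the listener is ready.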
Registering asynchronous event callbacks... 00:19:23.266 Starting namespace attribute notice tests for all controllers... 00:19:23.266 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:23.266 aer_cb - Changed Namespace 00:19:23.266 Cleaning up... 00:19:23.525 [ 00:19:23.525 { 00:19:23.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:23.525 "subtype": "Discovery", 00:19:23.525 "listen_addresses": [], 00:19:23.525 "allow_any_host": true, 00:19:23.525 "hosts": [] 00:19:23.525 }, 00:19:23.525 { 00:19:23.525 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:23.525 "subtype": "NVMe", 00:19:23.525 "listen_addresses": [ 00:19:23.525 { 00:19:23.525 "trtype": "VFIOUSER", 00:19:23.525 "adrfam": "IPv4", 00:19:23.525 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:23.525 "trsvcid": "0" 00:19:23.525 } 00:19:23.525 ], 00:19:23.525 "allow_any_host": true, 00:19:23.525 "hosts": [], 00:19:23.525 "serial_number": "SPDK1", 00:19:23.525 "model_number": "SPDK bdev Controller", 00:19:23.525 "max_namespaces": 32, 00:19:23.525 "min_cntlid": 1, 00:19:23.525 "max_cntlid": 65519, 00:19:23.525 "namespaces": [ 00:19:23.525 { 00:19:23.525 "nsid": 1, 00:19:23.525 "bdev_name": "Malloc1", 00:19:23.525 "name": "Malloc1", 00:19:23.525 "nguid": "973799FCD21242BD86CFDC25CCD52000", 00:19:23.525 "uuid": "973799fc-d212-42bd-86cf-dc25ccd52000" 00:19:23.525 }, 00:19:23.525 { 00:19:23.525 "nsid": 2, 00:19:23.525 "bdev_name": "Malloc3", 00:19:23.525 "name": "Malloc3", 00:19:23.525 "nguid": "7D4441EF17AC41B092182019A35A89D4", 00:19:23.525 "uuid": "7d4441ef-17ac-41b0-9218-2019a35a89d4" 00:19:23.525 } 00:19:23.525 ] 00:19:23.525 }, 00:19:23.525 { 00:19:23.525 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:23.525 "subtype": "NVMe", 00:19:23.525 "listen_addresses": [ 00:19:23.525 { 00:19:23.525 "trtype": "VFIOUSER", 00:19:23.525 "adrfam": "IPv4", 00:19:23.525 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:23.525 "trsvcid": "0" 
00:19:23.525 } 00:19:23.526 ], 00:19:23.526 "allow_any_host": true, 00:19:23.526 "hosts": [], 00:19:23.526 "serial_number": "SPDK2", 00:19:23.526 "model_number": "SPDK bdev Controller", 00:19:23.526 "max_namespaces": 32, 00:19:23.526 "min_cntlid": 1, 00:19:23.526 "max_cntlid": 65519, 00:19:23.526 "namespaces": [ 00:19:23.526 { 00:19:23.526 "nsid": 1, 00:19:23.526 "bdev_name": "Malloc2", 00:19:23.526 "name": "Malloc2", 00:19:23.526 "nguid": "CE94B18FC46144D2927CB77881B259F3", 00:19:23.526 "uuid": "ce94b18f-c461-44d2-927c-b77881b259f3" 00:19:23.526 }, 00:19:23.526 { 00:19:23.526 "nsid": 2, 00:19:23.526 "bdev_name": "Malloc4", 00:19:23.526 "name": "Malloc4", 00:19:23.526 "nguid": "CC9AD0706A544CF2B8BD9BEB9136B750", 00:19:23.526 "uuid": "cc9ad070-6a54-4cf2-b8bd-9beb9136b750" 00:19:23.526 } 00:19:23.526 ] 00:19:23.526 } 00:19:23.526 ] 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 233907 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 228320 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 228320 ']' 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 228320 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.526 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 228320 00:19:23.785 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:23.785 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:23.785 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 228320' 00:19:23.785 killing process with pid 228320 00:19:23.785 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 228320 00:19:23.785 16:25:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 228320 00:19:24.044 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:24.044 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:24.044 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:24.044 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:24.044 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=234057 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 234057' 00:19:24.045 Process pid: 234057 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 234057 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 234057 ']' 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:24.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:24.045 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:24.045 [2024-11-19 16:25:14.221746] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:24.045 [2024-11-19 16:25:14.222759] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:19:24.045 [2024-11-19 16:25:14.222812] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.045 [2024-11-19 16:25:14.288663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:24.045 [2024-11-19 16:25:14.334903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.045 [2024-11-19 16:25:14.334956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:24.045 [2024-11-19 16:25:14.334983] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.045 [2024-11-19 16:25:14.334994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:24.045 [2024-11-19 16:25:14.335003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.045 [2024-11-19 16:25:14.339091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.045 [2024-11-19 16:25:14.339157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.045 [2024-11-19 16:25:14.339222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:24.045 [2024-11-19 16:25:14.339226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.303 [2024-11-19 16:25:14.421785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:24.303 [2024-11-19 16:25:14.422102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:24.303 [2024-11-19 16:25:14.422306] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:19:24.303 [2024-11-19 16:25:14.422857] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:24.303 [2024-11-19 16:25:14.423129] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:19:24.303 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.303 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:24.303 16:25:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:25.244 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:25.503 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:25.503 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:25.503 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:25.503 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:25.503 16:25:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:25.761 Malloc1 00:19:25.761 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:26.328 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:26.587 16:25:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:26.846 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:26.846 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:26.846 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:27.105 Malloc2 00:19:27.105 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:27.363 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:27.622 16:25:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 234057 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 234057 ']' 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 234057 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.881 16:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234057 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234057' 00:19:27.881 killing process with pid 234057 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 234057 00:19:27.881 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 234057 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:28.140 00:19:28.140 real 0m53.762s 00:19:28.140 user 3m27.913s 00:19:28.140 sys 0m3.880s 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:28.140 ************************************ 00:19:28.140 END TEST nvmf_vfio_user 00:19:28.140 ************************************ 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:28.140 16:25:18 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:28.398 ************************************ 00:19:28.398 START TEST nvmf_vfio_user_nvme_compliance 00:19:28.398 ************************************ 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:28.398 * Looking for test storage... 00:19:28.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:28.398 16:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:28.398 16:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.398 --rc genhtml_branch_coverage=1 00:19:28.398 --rc genhtml_function_coverage=1 00:19:28.398 --rc genhtml_legend=1 00:19:28.398 --rc geninfo_all_blocks=1 00:19:28.398 --rc geninfo_unexecuted_blocks=1 00:19:28.398 00:19:28.398 ' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.398 --rc genhtml_branch_coverage=1 00:19:28.398 --rc genhtml_function_coverage=1 00:19:28.398 --rc genhtml_legend=1 00:19:28.398 --rc geninfo_all_blocks=1 00:19:28.398 --rc geninfo_unexecuted_blocks=1 00:19:28.398 00:19:28.398 ' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.398 --rc genhtml_branch_coverage=1 00:19:28.398 --rc genhtml_function_coverage=1 00:19:28.398 --rc 
genhtml_legend=1 00:19:28.398 --rc geninfo_all_blocks=1 00:19:28.398 --rc geninfo_unexecuted_blocks=1 00:19:28.398 00:19:28.398 ' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:28.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:28.398 --rc genhtml_branch_coverage=1 00:19:28.398 --rc genhtml_function_coverage=1 00:19:28.398 --rc genhtml_legend=1 00:19:28.398 --rc geninfo_all_blocks=1 00:19:28.398 --rc geninfo_unexecuted_blocks=1 00:19:28.398 00:19:28.398 ' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.398 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.399 16:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:28.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:28.399 16:25:18 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=234660 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 234660' 00:19:28.399 Process pid: 234660 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 234660 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 234660 ']' 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.399 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:28.399 [2024-11-19 16:25:18.723441] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:19:28.399 [2024-11-19 16:25:18.723542] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.658 [2024-11-19 16:25:18.794899] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:28.658 [2024-11-19 16:25:18.846010] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.658 [2024-11-19 16:25:18.846062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.658 [2024-11-19 16:25:18.846098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.658 [2024-11-19 16:25:18.846110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.658 [2024-11-19 16:25:18.846119] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
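
The `app_setup_trace` NOTICE lines above describe how a runtime trace snapshot can be taken from the running target. As a hedged sketch (binary names and the shm-id / tracepoint-mask / core-mask values are copied from this log and may differ on other runs), the launch and snapshot command lines can be assembled as:

```shell
# Sketch only: paths and flag values are taken from this CI log and may
# differ elsewhere. -i = shared-memory instance id, -e = tracepoint group
# mask, -m = reactor core mask, as reported by app_setup_trace above.
shm_id=0
trace_mask=0xFFFF
core_mask=0x7
launch_cmd="build/bin/nvmf_tgt -i ${shm_id} -e ${trace_mask} -m ${core_mask}"
snapshot_cmd="spdk_trace -s nvmf -i ${shm_id}"
echo "${launch_cmd}"
echo "${snapshot_cmd}"
```

Alternatively, as the log itself notes, `/dev/shm/nvmf_trace.0` can simply be copied for offline analysis once the target is running.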
00:19:28.658 [2024-11-19 16:25:18.847635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:28.658 [2024-11-19 16:25:18.847664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:28.658 [2024-11-19 16:25:18.847668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.658 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.658 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:28.658 16:25:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:30.042 16:25:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.042 16:25:19 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 malloc0 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:30.042 16:25:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:30.042 00:19:30.042 00:19:30.042 CUnit - A unit testing framework for C - Version 2.1-3 00:19:30.042 http://cunit.sourceforge.net/ 00:19:30.042 00:19:30.042 00:19:30.042 Suite: nvme_compliance 00:19:30.042 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-19 16:25:20.231614] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.042 [2024-11-19 16:25:20.233089] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:30.042 [2024-11-19 16:25:20.233129] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:30.042 [2024-11-19 16:25:20.233143] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:30.042 [2024-11-19 16:25:20.236642] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.042 passed 00:19:30.042 Test: admin_identify_ctrlr_verify_fused ...[2024-11-19 16:25:20.321252] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.042 [2024-11-19 16:25:20.324270] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.042 passed 00:19:30.301 Test: admin_identify_ns ...[2024-11-19 16:25:20.412686] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.301 [2024-11-19 16:25:20.470088] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:30.301 [2024-11-19 16:25:20.478103] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:30.301 [2024-11-19 16:25:20.502230] vfio_user.c:2802:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:30.301 passed 00:19:30.301 Test: admin_get_features_mandatory_features ...[2024-11-19 16:25:20.582196] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.302 [2024-11-19 16:25:20.585216] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.302 passed 00:19:30.560 Test: admin_get_features_optional_features ...[2024-11-19 16:25:20.668754] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.560 [2024-11-19 16:25:20.674788] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.560 passed 00:19:30.560 Test: admin_set_features_number_of_queues ...[2024-11-19 16:25:20.757985] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.560 [2024-11-19 16:25:20.868190] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.821 passed 00:19:30.821 Test: admin_get_log_page_mandatory_logs ...[2024-11-19 16:25:20.951621] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.821 [2024-11-19 16:25:20.954641] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.821 passed 00:19:30.821 Test: admin_get_log_page_with_lpo ...[2024-11-19 16:25:21.036998] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:30.821 [2024-11-19 16:25:21.106086] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:30.821 [2024-11-19 16:25:21.119172] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:30.821 passed 00:19:31.080 Test: fabric_property_get ...[2024-11-19 16:25:21.202899] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.080 [2024-11-19 16:25:21.204213] vfio_user.c:5604:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:31.080 [2024-11-19 16:25:21.205916] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.080 passed 00:19:31.080 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-19 16:25:21.288451] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.080 [2024-11-19 16:25:21.289752] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:31.080 [2024-11-19 16:25:21.291482] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.080 passed 00:19:31.080 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-19 16:25:21.376732] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.338 [2024-11-19 16:25:21.461085] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.338 [2024-11-19 16:25:21.477081] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.338 [2024-11-19 16:25:21.482193] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.338 passed 00:19:31.338 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-19 16:25:21.562753] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.338 [2024-11-19 16:25:21.564082] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:31.338 [2024-11-19 16:25:21.565773] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.338 passed 00:19:31.338 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-19 16:25:21.650687] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.597 [2024-11-19 16:25:21.725082] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:31.597 [2024-11-19 
16:25:21.749097] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:31.597 [2024-11-19 16:25:21.754176] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.597 passed 00:19:31.597 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-19 16:25:21.837224] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.597 [2024-11-19 16:25:21.838543] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:31.597 [2024-11-19 16:25:21.838594] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:31.597 [2024-11-19 16:25:21.840250] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.597 passed 00:19:31.597 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-19 16:25:21.922676] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.855 [2024-11-19 16:25:22.018077] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:31.855 [2024-11-19 16:25:22.026081] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:31.855 [2024-11-19 16:25:22.034094] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:31.855 [2024-11-19 16:25:22.042096] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:31.855 [2024-11-19 16:25:22.071194] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:31.855 passed 00:19:31.855 Test: admin_create_io_sq_verify_pc ...[2024-11-19 16:25:22.150890] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:31.855 [2024-11-19 16:25:22.167095] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:31.855 [2024-11-19 16:25:22.184353] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:32.113 passed 00:19:32.114 Test: admin_create_io_qp_max_qps ...[2024-11-19 16:25:22.270936] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.057 [2024-11-19 16:25:23.377089] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:33.625 [2024-11-19 16:25:23.755934] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.625 passed 00:19:33.625 Test: admin_create_io_sq_shared_cq ...[2024-11-19 16:25:23.838285] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:33.885 [2024-11-19 16:25:23.970080] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:33.885 [2024-11-19 16:25:24.002168] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:33.885 passed 00:19:33.885 00:19:33.885 Run Summary: Type Total Ran Passed Failed Inactive 00:19:33.885 suites 1 1 n/a 0 0 00:19:33.885 tests 18 18 18 0 0 00:19:33.885 asserts 360 360 360 0 n/a 00:19:33.885 00:19:33.885 Elapsed time = 1.561 seconds 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 234660 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 234660 ']' 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 234660 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 234660 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 234660' 00:19:33.885 killing process with pid 234660 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 234660 00:19:33.885 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 234660 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:34.145 00:19:34.145 real 0m5.796s 00:19:34.145 user 0m16.264s 00:19:34.145 sys 0m0.575s 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 ************************************ 00:19:34.145 END TEST nvmf_vfio_user_nvme_compliance 00:19:34.145 ************************************ 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:34.145 ************************************ 00:19:34.145 START TEST nvmf_vfio_user_fuzz 00:19:34.145 ************************************ 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:34.145 * Looking for test storage... 00:19:34.145 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:19:34.145 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:34.406 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:34.407 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:34.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.407 --rc genhtml_branch_coverage=1 00:19:34.407 --rc genhtml_function_coverage=1 00:19:34.407 --rc genhtml_legend=1 00:19:34.407 --rc geninfo_all_blocks=1 00:19:34.407 --rc geninfo_unexecuted_blocks=1 00:19:34.407 00:19:34.407 ' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:34.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.407 --rc genhtml_branch_coverage=1 00:19:34.407 --rc genhtml_function_coverage=1 00:19:34.407 --rc genhtml_legend=1 00:19:34.407 --rc geninfo_all_blocks=1 00:19:34.407 --rc geninfo_unexecuted_blocks=1 00:19:34.407 00:19:34.407 ' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:34.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:34.407 --rc genhtml_branch_coverage=1 00:19:34.407 --rc genhtml_function_coverage=1 00:19:34.407 --rc genhtml_legend=1 00:19:34.407 --rc geninfo_all_blocks=1 00:19:34.407 --rc geninfo_unexecuted_blocks=1 00:19:34.407 00:19:34.407 ' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:34.407 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:34.407 --rc genhtml_branch_coverage=1 00:19:34.407 --rc genhtml_function_coverage=1 00:19:34.407 --rc genhtml_legend=1 00:19:34.407 --rc geninfo_all_blocks=1 00:19:34.407 --rc geninfo_unexecuted_blocks=1 00:19:34.407 00:19:34.407 ' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.407 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:34.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:34.407 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=235391 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 235391' 00:19:34.408 Process pid: 235391 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 235391 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 235391 ']' 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:34.408 16:25:24 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:34.408 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:34.668 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.668 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:34.668 16:25:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 malloc0 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:35.609 16:25:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:07.689 Fuzzing completed. Shutting down the fuzz application 00:20:07.689 00:20:07.689 Dumping successful admin opcodes: 00:20:07.689 8, 9, 10, 24, 00:20:07.689 Dumping successful io opcodes: 00:20:07.689 0, 00:20:07.689 NS: 0x20000081ef00 I/O qp, Total commands completed: 666580, total successful commands: 2601, random_seed: 408047424 00:20:07.689 NS: 0x20000081ef00 admin qp, Total commands completed: 85554, total successful commands: 682, random_seed: 342175616 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 235391 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 235391 ']' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 235391 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 235391 00:20:07.689 16:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 235391' 00:20:07.689 killing process with pid 235391 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 235391 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 235391 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:07.689 00:20:07.689 real 0m32.181s 00:20:07.689 user 0m30.530s 00:20:07.689 sys 0m29.208s 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:07.689 ************************************ 00:20:07.689 END TEST nvmf_vfio_user_fuzz 00:20:07.689 ************************************ 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.689 ************************************ 00:20:07.689 START TEST nvmf_auth_target 00:20:07.689 ************************************ 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:07.689 * Looking for test storage... 00:20:07.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:07.689 16:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:07.689 16:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.689 --rc genhtml_branch_coverage=1 00:20:07.689 --rc genhtml_function_coverage=1 00:20:07.689 --rc genhtml_legend=1 00:20:07.689 --rc geninfo_all_blocks=1 00:20:07.689 --rc geninfo_unexecuted_blocks=1 00:20:07.689 00:20:07.689 ' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.689 --rc genhtml_branch_coverage=1 00:20:07.689 --rc genhtml_function_coverage=1 00:20:07.689 --rc genhtml_legend=1 00:20:07.689 --rc geninfo_all_blocks=1 00:20:07.689 --rc geninfo_unexecuted_blocks=1 00:20:07.689 00:20:07.689 ' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.689 --rc genhtml_branch_coverage=1 00:20:07.689 --rc genhtml_function_coverage=1 00:20:07.689 --rc genhtml_legend=1 00:20:07.689 --rc geninfo_all_blocks=1 00:20:07.689 --rc geninfo_unexecuted_blocks=1 00:20:07.689 00:20:07.689 ' 00:20:07.689 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:07.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.689 --rc genhtml_branch_coverage=1 00:20:07.689 --rc genhtml_function_coverage=1 00:20:07.689 --rc genhtml_legend=1 00:20:07.689 
--rc geninfo_all_blocks=1 00:20:07.689 --rc geninfo_unexecuted_blocks=1 00:20:07.689 00:20:07.689 ' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.690 
16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:07.690 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:07.690 16:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:07.690 16:25:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:07.690 16:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:08.628 16:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.628 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:08.629 16:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:08.629 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:08.629 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.629 
16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:08.629 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:08.629 
16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:08.629 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:08.629 16:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.629 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.888 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.888 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.888 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:08.888 16:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:08.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:20:08.888 00:20:08.888 --- 10.0.0.2 ping statistics --- 00:20:08.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.888 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:08.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:20:08.888 00:20:08.888 --- 10.0.0.1 ping statistics --- 00:20:08.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.888 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=240836 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 240836 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240836 ']' 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
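The network bring-up traced above (nvmf/common.sh@250-291) amounts to moving the target-side port into a private network namespace, addressing both ends, opening the NVMe/TCP port through the firewall, and proving reachability with a ping in each direction. A minimal sketch, assuming the cvl_0_0/cvl_0_1 port names and 10.0.0.0/24 addressing seen in this run (guarded so it only acts on a machine that actually has those ports):

```shell
setup_tcp_netns() {
    # Guard: this mutates host networking; only attempt it as root on a
    # machine that actually has the cvl_0_0/cvl_0_1 test ports.
    if [ "$(id -u)" -ne 0 ] || ! ip link show cvl_0_0 >/dev/null 2>&1; then
        echo "skipped: requires root plus the cvl_0_0/cvl_0_1 ports"
        return 0
    fi
    ns=cvl_0_0_ns_spdk
    ip netns add "$ns"
    ip link set cvl_0_0 netns "$ns"                           # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side (host)
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side (netns)
    ip link set cvl_0_1 up
    ip netns exec "$ns" ip link set cvl_0_0 up
    ip netns exec "$ns" ip link set lo up
    # open NVMe/TCP's well-known port 4420, then prove host -> target reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 >/dev/null && echo ok
}

setup_tcp_netns
```

The nvmf_tgt app is then launched under `ip netns exec cvl_0_0_ns_spdk` so it binds inside the namespace, which is why the trace prepends NVMF_TARGET_NS_CMD to NVMF_APP.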
00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.888 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=240856 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bc2731715de0d7620a7936dd43a76b5b611669950bc9693b 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kov 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bc2731715de0d7620a7936dd43a76b5b611669950bc9693b 0 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bc2731715de0d7620a7936dd43a76b5b611669950bc9693b 0 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bc2731715de0d7620a7936dd43a76b5b611669950bc9693b 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:09.147 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kov 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kov 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.kov 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=aafef6e003bb72d5dc473b1a6e47a3a9d2d0d7d5480190d9bf6f041908631849 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.3a3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key aafef6e003bb72d5dc473b1a6e47a3a9d2d0d7d5480190d9bf6f041908631849 3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 aafef6e003bb72d5dc473b1a6e47a3a9d2d0d7d5480190d9bf6f041908631849 3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=aafef6e003bb72d5dc473b1a6e47a3a9d2d0d7d5480190d9bf6f041908631849 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.3a3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.3a3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.3a3 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d455be71dc33ed77bd6b21f8cf125904 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.YXu 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d455be71dc33ed77bd6b21f8cf125904 1 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
d455be71dc33ed77bd6b21f8cf125904 1 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d455be71dc33ed77bd6b21f8cf125904 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.YXu 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.YXu 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.YXu 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:09.407 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1b26daf35188fbc59762cc7097826987efc539d3b5a95414 00:20:09.407 16:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.A7K 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1b26daf35188fbc59762cc7097826987efc539d3b5a95414 2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1b26daf35188fbc59762cc7097826987efc539d3b5a95414 2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1b26daf35188fbc59762cc7097826987efc539d3b5a95414 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.A7K 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.A7K 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.A7K 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=12baeef3005501f5c15a84259f7086373aaeaa489fc0dfd2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.naD 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 12baeef3005501f5c15a84259f7086373aaeaa489fc0dfd2 2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 12baeef3005501f5c15a84259f7086373aaeaa489fc0dfd2 2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=12baeef3005501f5c15a84259f7086373aaeaa489fc0dfd2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.naD 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.naD 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.naD 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=31b3d4f2470e84f1e801cadb87bc20cf 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.z48 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 31b3d4f2470e84f1e801cadb87bc20cf 1 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 31b3d4f2470e84f1e801cadb87bc20cf 1 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=31b3d4f2470e84f1e801cadb87bc20cf 00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:20:09.408 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.z48 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.z48 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.z48 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=960ceab1be59596675ed6e08b81f28be09fb01e2fd8ffa0f09f5547528ff99f1 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.kTA 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 960ceab1be59596675ed6e08b81f28be09fb01e2fd8ffa0f09f5547528ff99f1 3 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 960ceab1be59596675ed6e08b81f28be09fb01e2fd8ffa0f09f5547528ff99f1 3 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=960ceab1be59596675ed6e08b81f28be09fb01e2fd8ffa0f09f5547528ff99f1 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.kTA 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.kTA 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kTA 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 240836 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240836 ']' 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:09.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
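The repeated gen_dhchap_key/format_dhchap_key sequences traced above (xxd from /dev/urandom, then a python heredoc that emits the DHHC-1 string) can be condensed into one helper. A sketch, with two hedged assumptions: the ASCII hex string itself is the secret (matching `xxd -p -c0 -l $((len/2)) /dev/urandom` producing `len` characters), and the DHHC-1 representation is base64 of the secret followed by its little-endian CRC32:

```shell
gen_dhchap_key_sketch() {
    # $1 = secret length in ASCII characters, $2 = digest id
    # (0=null, 1=sha256, 2=sha384, 3=sha512), as in the gen_dhchap_key calls above.
    len=$1 digest=$2
    # len hex characters from /dev/urandom; od stands in for the trace's xxd.
    key=$(od -An -tx1 -N $((len / 2)) /dev/urandom | tr -d ' \n')
    # DHHC-1 string -- assumption: base64(secret || little-endian CRC32(secret)),
    # mirroring the python heredoc the trace invokes at nvmf/common.sh@733.
    python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
key, digest = sys.argv[1].encode(), int(sys.argv[2])
crc = zlib.crc32(key).to_bytes(4, "little")
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()))
EOF
}

gen_dhchap_key_sketch 48 0   # e.g. the 48-character null-digest key the test generates first
```

The result is then written to a `mktemp -t spdk.key-<digest>.XXX` file and chmod'ed 0600, which is the path stored in keys[]/ckeys[].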
00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.667 16:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 240856 /var/tmp/host.sock 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 240856 ']' 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:09.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
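The keyring_file_add_key registrations that follow pair each key file (and its controller key, when one was generated) with both RPC servers: the target app on its default socket and the host app on /var/tmp/host.sock. A condensed sketch using this run's file names; the table of files is an illustrative stand-in for the keys[]/ckeys[] arrays, not a verbatim excerpt of target/auth.sh:

```shell
register_dhchap_keys() {
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    if [ ! -x "$rpc" ]; then
        echo "skipped: SPDK rpc.py not present"
        return 0
    fi
    # index, key file, controller key file (absent for index 3, as in this run)
    while read -r i key ckey; do
        "$rpc" keyring_file_add_key "key$i" "$key"                        # target keyring
        "$rpc" -s /var/tmp/host.sock keyring_file_add_key "key$i" "$key"  # host keyring
        if [ -n "$ckey" ]; then
            "$rpc" keyring_file_add_key "ckey$i" "$ckey"
            "$rpc" -s /var/tmp/host.sock keyring_file_add_key "ckey$i" "$ckey"
        fi
    done <<'EOF'
0 /tmp/spdk.key-null.kov /tmp/spdk.key-sha512.3a3
1 /tmp/spdk.key-sha256.YXu /tmp/spdk.key-sha384.A7K
2 /tmp/spdk.key-sha384.naD /tmp/spdk.key-sha256.z48
3 /tmp/spdk.key-sha512.kTA
EOF
}

register_dhchap_keys
```

Registering under the names key0..key3/ckey0..ckey2 is what lets the later `--dhchap-key keyN --dhchap-ctrlr-key ckeyN` arguments resolve on both sides.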
00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:09.926 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kov 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.kov 00:20:10.185 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.kov 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.3a3 ]] 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:20:10.443 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YXu 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.YXu 00:20:10.702 16:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.YXu 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.A7K ]] 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A7K 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A7K 00:20:10.960 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A7K 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.naD 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.naD 00:20:11.219 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.naD 00:20:11.478 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.z48 ]] 00:20:11.478 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z48 00:20:11.478 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.478 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.737 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.737 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z48 00:20:11.737 16:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z48 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kTA 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kTA 00:20:11.995 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kTA 00:20:12.253 16:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:12.253 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:12.253 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:12.253 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.253 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.253 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.511 16:26:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.511 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.512 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.512 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.770 00:20:12.770 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.770 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.770 16:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.029 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.029 { 00:20:13.029 "cntlid": 1, 00:20:13.029 "qid": 0, 00:20:13.029 "state": "enabled", 00:20:13.029 "thread": "nvmf_tgt_poll_group_000", 00:20:13.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:13.029 "listen_address": { 00:20:13.029 "trtype": "TCP", 00:20:13.029 "adrfam": "IPv4", 00:20:13.029 "traddr": "10.0.0.2", 00:20:13.029 "trsvcid": "4420" 00:20:13.029 }, 00:20:13.029 "peer_address": { 00:20:13.029 "trtype": "TCP", 00:20:13.029 "adrfam": "IPv4", 00:20:13.029 "traddr": "10.0.0.1", 00:20:13.030 "trsvcid": "57900" 00:20:13.030 }, 00:20:13.030 "auth": { 00:20:13.030 "state": "completed", 00:20:13.030 "digest": "sha256", 00:20:13.030 "dhgroup": "null" 00:20:13.030 } 00:20:13.030 } 00:20:13.030 ]' 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.030 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.288 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:13.288 16:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.557 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:18.557 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.557 16:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.815 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.815 { 00:20:18.815 "cntlid": 3, 00:20:18.815 "qid": 0, 00:20:18.815 "state": "enabled", 00:20:18.815 "thread": "nvmf_tgt_poll_group_000", 00:20:18.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:18.815 "listen_address": { 00:20:18.815 "trtype": "TCP", 00:20:18.815 "adrfam": "IPv4", 00:20:18.815 
"traddr": "10.0.0.2", 00:20:18.815 "trsvcid": "4420" 00:20:18.815 }, 00:20:18.815 "peer_address": { 00:20:18.815 "trtype": "TCP", 00:20:18.815 "adrfam": "IPv4", 00:20:18.815 "traddr": "10.0.0.1", 00:20:18.815 "trsvcid": "57924" 00:20:18.815 }, 00:20:18.815 "auth": { 00:20:18.815 "state": "completed", 00:20:18.815 "digest": "sha256", 00:20:18.815 "dhgroup": "null" 00:20:18.815 } 00:20:18.816 } 00:20:18.816 ]' 00:20:18.816 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.816 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.816 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.816 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:18.816 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.076 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.076 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.076 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.336 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:19.336 16:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.274 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.274 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.533 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:20.791 00:20:20.791 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:20.791 16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:20.791 
16:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.049 { 00:20:21.049 "cntlid": 5, 00:20:21.049 "qid": 0, 00:20:21.049 "state": "enabled", 00:20:21.049 "thread": "nvmf_tgt_poll_group_000", 00:20:21.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:21.049 "listen_address": { 00:20:21.049 "trtype": "TCP", 00:20:21.049 "adrfam": "IPv4", 00:20:21.049 "traddr": "10.0.0.2", 00:20:21.049 "trsvcid": "4420" 00:20:21.049 }, 00:20:21.049 "peer_address": { 00:20:21.049 "trtype": "TCP", 00:20:21.049 "adrfam": "IPv4", 00:20:21.049 "traddr": "10.0.0.1", 00:20:21.049 "trsvcid": "36884" 00:20:21.049 }, 00:20:21.049 "auth": { 00:20:21.049 "state": "completed", 00:20:21.049 "digest": "sha256", 00:20:21.049 "dhgroup": "null" 00:20:21.049 } 00:20:21.049 } 00:20:21.049 ]' 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.049 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.617 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:21.617 16:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:22.554 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.554 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.554 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:22.554 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.555 16:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:23.124 00:20:23.124 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.124 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.124 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.383 
16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.383 { 00:20:23.383 "cntlid": 7, 00:20:23.383 "qid": 0, 00:20:23.383 "state": "enabled", 00:20:23.383 "thread": "nvmf_tgt_poll_group_000", 00:20:23.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:23.383 "listen_address": { 00:20:23.383 "trtype": "TCP", 00:20:23.383 "adrfam": "IPv4", 00:20:23.383 "traddr": "10.0.0.2", 00:20:23.383 "trsvcid": "4420" 00:20:23.383 }, 00:20:23.383 "peer_address": { 00:20:23.383 "trtype": "TCP", 00:20:23.383 "adrfam": "IPv4", 00:20:23.383 "traddr": "10.0.0.1", 00:20:23.383 "trsvcid": "36928" 00:20:23.383 }, 00:20:23.383 "auth": { 00:20:23.383 "state": "completed", 00:20:23.383 "digest": "sha256", 00:20:23.383 "dhgroup": "null" 00:20:23.383 } 00:20:23.383 } 00:20:23.383 ]' 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.383 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.642 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:23.642 16:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:24.576 16:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.834 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.092 00:20:25.092 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:25.092 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:25.092 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:25.351 { 00:20:25.351 "cntlid": 9, 00:20:25.351 "qid": 0, 00:20:25.351 "state": "enabled", 00:20:25.351 "thread": "nvmf_tgt_poll_group_000", 00:20:25.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:25.351 "listen_address": { 00:20:25.351 "trtype": "TCP", 00:20:25.351 "adrfam": "IPv4", 00:20:25.351 "traddr": "10.0.0.2", 00:20:25.351 "trsvcid": "4420" 00:20:25.351 }, 00:20:25.351 "peer_address": { 00:20:25.351 "trtype": "TCP", 00:20:25.351 "adrfam": "IPv4", 00:20:25.351 "traddr": "10.0.0.1", 00:20:25.351 "trsvcid": "36952" 00:20:25.351 
}, 00:20:25.351 "auth": { 00:20:25.351 "state": "completed", 00:20:25.351 "digest": "sha256", 00:20:25.351 "dhgroup": "ffdhe2048" 00:20:25.351 } 00:20:25.351 } 00:20:25.351 ]' 00:20:25.351 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.609 16:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.868 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:25.868 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret 
DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:26.807 16:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.066 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.324 00:20:27.324 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.324 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.324 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.583 { 00:20:27.583 "cntlid": 11, 00:20:27.583 "qid": 0, 00:20:27.583 "state": "enabled", 00:20:27.583 "thread": "nvmf_tgt_poll_group_000", 00:20:27.583 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:27.583 "listen_address": { 00:20:27.583 "trtype": "TCP", 00:20:27.583 "adrfam": "IPv4", 00:20:27.583 "traddr": "10.0.0.2", 00:20:27.583 "trsvcid": "4420" 00:20:27.583 }, 00:20:27.583 "peer_address": { 00:20:27.583 "trtype": "TCP", 00:20:27.583 "adrfam": "IPv4", 00:20:27.583 "traddr": "10.0.0.1", 00:20:27.583 "trsvcid": "36986" 00:20:27.583 }, 00:20:27.583 "auth": { 00:20:27.583 "state": "completed", 00:20:27.583 "digest": "sha256", 00:20:27.583 "dhgroup": "ffdhe2048" 00:20:27.583 } 00:20:27.583 } 00:20:27.583 ]' 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.583 16:26:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:27.583 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.842 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.842 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.842 16:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.102 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:28.102 16:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.042 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.042 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:29.300 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.301 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.559 00:20:29.559 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.559 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.559 16:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.818 16:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.818 { 00:20:29.818 "cntlid": 13, 00:20:29.818 "qid": 0, 00:20:29.818 "state": "enabled", 00:20:29.818 "thread": "nvmf_tgt_poll_group_000", 00:20:29.818 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:29.818 "listen_address": { 00:20:29.818 "trtype": "TCP", 00:20:29.818 "adrfam": "IPv4", 00:20:29.818 "traddr": "10.0.0.2", 00:20:29.818 "trsvcid": "4420" 00:20:29.818 }, 00:20:29.818 "peer_address": { 00:20:29.818 "trtype": "TCP", 00:20:29.818 "adrfam": "IPv4", 00:20:29.818 "traddr": "10.0.0.1", 00:20:29.818 "trsvcid": "58482" 00:20:29.818 }, 00:20:29.818 "auth": { 00:20:29.818 "state": "completed", 00:20:29.818 "digest": "sha256", 00:20:29.818 "dhgroup": "ffdhe2048" 00:20:29.818 } 00:20:29.818 } 00:20:29.818 ]' 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.818 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.388 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:30.388 16:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:30.956 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.214 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.214 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:31.214 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.215 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.215 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.215 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.215 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.215 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.473 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.732 00:20:31.732 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.732 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.732 16:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.991 { 00:20:31.991 "cntlid": 15, 00:20:31.991 "qid": 0, 00:20:31.991 "state": "enabled", 00:20:31.991 "thread": "nvmf_tgt_poll_group_000", 00:20:31.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:31.991 "listen_address": { 00:20:31.991 "trtype": "TCP", 00:20:31.991 "adrfam": "IPv4", 00:20:31.991 "traddr": "10.0.0.2", 00:20:31.991 "trsvcid": "4420" 00:20:31.991 }, 00:20:31.991 "peer_address": { 00:20:31.991 "trtype": "TCP", 00:20:31.991 "adrfam": "IPv4", 00:20:31.991 "traddr": "10.0.0.1", 
00:20:31.991 "trsvcid": "58514" 00:20:31.991 }, 00:20:31.991 "auth": { 00:20:31.991 "state": "completed", 00:20:31.991 "digest": "sha256", 00:20:31.991 "dhgroup": "ffdhe2048" 00:20:31.991 } 00:20:31.991 } 00:20:31.991 ]' 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.991 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.558 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:32.558 16:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:33.498 16:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.498 16:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:34.065 00:20:34.065 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.065 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.065 16:26:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.323 { 00:20:34.323 "cntlid": 17, 00:20:34.323 "qid": 0, 00:20:34.323 "state": "enabled", 00:20:34.323 "thread": "nvmf_tgt_poll_group_000", 00:20:34.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:34.323 "listen_address": { 00:20:34.323 "trtype": "TCP", 00:20:34.323 "adrfam": "IPv4", 00:20:34.323 "traddr": "10.0.0.2", 00:20:34.323 "trsvcid": "4420" 00:20:34.323 }, 00:20:34.323 "peer_address": { 00:20:34.323 "trtype": "TCP", 00:20:34.323 "adrfam": "IPv4", 00:20:34.323 "traddr": "10.0.0.1", 00:20:34.323 "trsvcid": "58540" 00:20:34.323 }, 00:20:34.323 "auth": { 00:20:34.323 "state": "completed", 00:20:34.323 "digest": "sha256", 00:20:34.323 "dhgroup": "ffdhe3072" 00:20:34.323 } 00:20:34.323 } 00:20:34.323 ]' 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.323 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.583 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:34.583 16:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.521 16:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.780 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.349 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.349 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.609 
16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.609 { 00:20:36.609 "cntlid": 19, 00:20:36.609 "qid": 0, 00:20:36.609 "state": "enabled", 00:20:36.609 "thread": "nvmf_tgt_poll_group_000", 00:20:36.609 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:36.609 "listen_address": { 00:20:36.609 "trtype": "TCP", 00:20:36.609 "adrfam": "IPv4", 00:20:36.609 "traddr": "10.0.0.2", 00:20:36.609 "trsvcid": "4420" 00:20:36.609 }, 00:20:36.609 "peer_address": { 00:20:36.609 "trtype": "TCP", 00:20:36.609 "adrfam": "IPv4", 00:20:36.609 "traddr": "10.0.0.1", 00:20:36.609 "trsvcid": "58568" 00:20:36.609 }, 00:20:36.609 "auth": { 00:20:36.609 "state": "completed", 00:20:36.609 "digest": "sha256", 00:20:36.609 "dhgroup": "ffdhe3072" 00:20:36.609 } 00:20:36.609 } 00:20:36.609 ]' 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.609 16:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.868 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:36.868 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:37.806 16:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:37.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:37.806 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.064 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.065 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.065 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.065 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.065 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.065 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.065 16:26:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.632 00:20:38.632 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.632 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:38.632 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.890 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:38.890 { 00:20:38.890 "cntlid": 21, 00:20:38.890 "qid": 0, 00:20:38.890 "state": "enabled", 00:20:38.890 "thread": "nvmf_tgt_poll_group_000", 00:20:38.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:38.890 "listen_address": { 00:20:38.890 "trtype": "TCP", 00:20:38.890 "adrfam": "IPv4", 00:20:38.890 "traddr": "10.0.0.2", 00:20:38.890 "trsvcid": "4420" 00:20:38.890 }, 00:20:38.890 "peer_address": { 
00:20:38.890 "trtype": "TCP", 00:20:38.890 "adrfam": "IPv4", 00:20:38.890 "traddr": "10.0.0.1", 00:20:38.890 "trsvcid": "58582" 00:20:38.890 }, 00:20:38.890 "auth": { 00:20:38.890 "state": "completed", 00:20:38.890 "digest": "sha256", 00:20:38.890 "dhgroup": "ffdhe3072" 00:20:38.891 } 00:20:38.891 } 00:20:38.891 ]' 00:20:38.891 16:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:38.891 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.148 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:39.148 16:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:40.081 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.082 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:40.340 16:26:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.340 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:40.597 00:20:40.597 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.597 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.597 16:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.163 { 00:20:41.163 "cntlid": 23, 00:20:41.163 "qid": 0, 00:20:41.163 "state": "enabled", 00:20:41.163 "thread": "nvmf_tgt_poll_group_000", 00:20:41.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:41.163 "listen_address": { 00:20:41.163 "trtype": "TCP", 00:20:41.163 "adrfam": "IPv4", 00:20:41.163 "traddr": "10.0.0.2", 00:20:41.163 "trsvcid": "4420" 00:20:41.163 }, 00:20:41.163 "peer_address": { 00:20:41.163 "trtype": "TCP", 00:20:41.163 "adrfam": "IPv4", 00:20:41.163 "traddr": "10.0.0.1", 00:20:41.163 "trsvcid": "51380" 00:20:41.163 }, 00:20:41.163 "auth": { 00:20:41.163 "state": "completed", 00:20:41.163 "digest": "sha256", 00:20:41.163 "dhgroup": "ffdhe3072" 00:20:41.163 } 00:20:41.163 } 00:20:41.163 ]' 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.163 16:26:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.163 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.420 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:41.420 16:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.355 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.613 16:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:42.872 00:20:42.872 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.872 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.872 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.130 16:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.130 { 00:20:43.130 "cntlid": 25, 00:20:43.130 "qid": 0, 00:20:43.130 "state": "enabled", 00:20:43.130 "thread": "nvmf_tgt_poll_group_000", 00:20:43.130 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:43.130 "listen_address": { 00:20:43.130 "trtype": "TCP", 00:20:43.130 "adrfam": "IPv4", 00:20:43.130 "traddr": "10.0.0.2", 00:20:43.130 "trsvcid": "4420" 00:20:43.130 }, 00:20:43.130 "peer_address": { 00:20:43.130 "trtype": "TCP", 00:20:43.130 "adrfam": "IPv4", 00:20:43.130 "traddr": "10.0.0.1", 00:20:43.130 "trsvcid": "51418" 00:20:43.130 }, 00:20:43.130 "auth": { 00:20:43.130 "state": "completed", 00:20:43.130 "digest": "sha256", 00:20:43.130 "dhgroup": "ffdhe4096" 00:20:43.130 } 00:20:43.130 } 00:20:43.130 ]' 00:20:43.130 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.388 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.647 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:43.647 16:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.581 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.581 16:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:44.840 16:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.099 00:20:45.099 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.099 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.099 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.357 { 00:20:45.357 "cntlid": 27, 00:20:45.357 "qid": 0, 00:20:45.357 "state": "enabled", 00:20:45.357 "thread": "nvmf_tgt_poll_group_000", 00:20:45.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:45.357 "listen_address": { 00:20:45.357 "trtype": "TCP", 00:20:45.357 "adrfam": "IPv4", 00:20:45.357 "traddr": "10.0.0.2", 00:20:45.357 
"trsvcid": "4420" 00:20:45.357 }, 00:20:45.357 "peer_address": { 00:20:45.357 "trtype": "TCP", 00:20:45.357 "adrfam": "IPv4", 00:20:45.357 "traddr": "10.0.0.1", 00:20:45.357 "trsvcid": "51442" 00:20:45.357 }, 00:20:45.357 "auth": { 00:20:45.357 "state": "completed", 00:20:45.357 "digest": "sha256", 00:20:45.357 "dhgroup": "ffdhe4096" 00:20:45.357 } 00:20:45.357 } 00:20:45.357 ]' 00:20:45.357 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.616 16:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.874 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:45.874 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.816 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:46.816 16:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:47.074 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.075 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.333 00:20:47.333 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.333 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.333 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.902 { 00:20:47.902 "cntlid": 29, 00:20:47.902 "qid": 0, 00:20:47.902 "state": "enabled", 00:20:47.902 "thread": "nvmf_tgt_poll_group_000", 00:20:47.902 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:47.902 "listen_address": { 00:20:47.902 "trtype": "TCP", 00:20:47.902 "adrfam": "IPv4", 00:20:47.902 "traddr": "10.0.0.2", 00:20:47.902 "trsvcid": "4420" 00:20:47.902 }, 00:20:47.902 "peer_address": { 00:20:47.902 "trtype": "TCP", 00:20:47.902 "adrfam": "IPv4", 00:20:47.902 "traddr": "10.0.0.1", 00:20:47.902 "trsvcid": "51456" 00:20:47.902 }, 00:20:47.902 "auth": { 00:20:47.902 "state": "completed", 00:20:47.902 "digest": "sha256", 00:20:47.902 "dhgroup": "ffdhe4096" 00:20:47.902 } 00:20:47.902 } 00:20:47.902 ]' 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.902 16:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.902 16:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:47.902 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:47.902 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:47.902 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:47.902 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:47.902 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.161 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:48.161 16:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.099 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.357 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.617 00:20:49.617 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.617 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.617 16:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.876 { 00:20:49.876 "cntlid": 31, 00:20:49.876 "qid": 0, 00:20:49.876 "state": "enabled", 00:20:49.876 "thread": "nvmf_tgt_poll_group_000", 00:20:49.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:49.876 "listen_address": { 00:20:49.876 "trtype": "TCP", 00:20:49.876 "adrfam": "IPv4", 00:20:49.876 "traddr": "10.0.0.2", 00:20:49.876 "trsvcid": "4420" 00:20:49.876 }, 00:20:49.876 "peer_address": { 00:20:49.876 "trtype": "TCP", 00:20:49.876 "adrfam": "IPv4", 00:20:49.876 "traddr": "10.0.0.1", 00:20:49.876 "trsvcid": "54094" 00:20:49.876 }, 00:20:49.876 "auth": { 00:20:49.876 "state": "completed", 00:20:49.876 "digest": "sha256", 00:20:49.876 "dhgroup": "ffdhe4096" 00:20:49.876 } 00:20:49.876 } 00:20:49.876 ]' 00:20:49.876 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:50.134 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:50.134 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:50.134 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.134 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:50.134 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:50.135 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:50.135 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.393 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:50.393 16:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:51.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:51.332 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.332 16:26:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.591 16:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.161 00:20:52.161 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.161 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.161 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.420 { 00:20:52.420 "cntlid": 33, 00:20:52.420 "qid": 0, 00:20:52.420 "state": "enabled", 00:20:52.420 "thread": "nvmf_tgt_poll_group_000", 00:20:52.420 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:52.420 "listen_address": { 00:20:52.420 "trtype": "TCP", 00:20:52.420 "adrfam": "IPv4", 00:20:52.420 "traddr": "10.0.0.2", 00:20:52.420 
"trsvcid": "4420" 00:20:52.420 }, 00:20:52.420 "peer_address": { 00:20:52.420 "trtype": "TCP", 00:20:52.420 "adrfam": "IPv4", 00:20:52.420 "traddr": "10.0.0.1", 00:20:52.420 "trsvcid": "54114" 00:20:52.420 }, 00:20:52.420 "auth": { 00:20:52.420 "state": "completed", 00:20:52.420 "digest": "sha256", 00:20:52.420 "dhgroup": "ffdhe6144" 00:20:52.420 } 00:20:52.420 } 00:20:52.420 ]' 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.420 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.678 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:52.678 16:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:20:53.616 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.616 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.616 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:53.616 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.616 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.616 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.617 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.617 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.617 16:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.875 16:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.875 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.441 00:20:54.441 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.441 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.441 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.699 { 00:20:54.699 "cntlid": 35, 00:20:54.699 "qid": 0, 00:20:54.699 "state": "enabled", 00:20:54.699 "thread": "nvmf_tgt_poll_group_000", 00:20:54.699 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:54.699 "listen_address": { 00:20:54.699 "trtype": "TCP", 00:20:54.699 "adrfam": "IPv4", 00:20:54.699 "traddr": "10.0.0.2", 00:20:54.699 "trsvcid": "4420" 00:20:54.699 }, 00:20:54.699 "peer_address": { 00:20:54.699 "trtype": "TCP", 00:20:54.699 "adrfam": "IPv4", 00:20:54.699 "traddr": "10.0.0.1", 00:20:54.699 "trsvcid": "54132" 00:20:54.699 }, 00:20:54.699 "auth": { 00:20:54.699 "state": "completed", 00:20:54.699 "digest": "sha256", 00:20:54.699 "dhgroup": "ffdhe6144" 00:20:54.699 } 00:20:54.699 } 00:20:54.699 ]' 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.699 16:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:54.699 16:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.699 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.700 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.700 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.266 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:55.266 16:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.204 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.204 16:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:56.772 00:20:56.772 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:56.772 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:56.772 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.030 16:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.030 { 00:20:57.030 "cntlid": 37, 00:20:57.030 "qid": 0, 00:20:57.030 "state": "enabled", 00:20:57.030 "thread": "nvmf_tgt_poll_group_000", 00:20:57.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:57.030 "listen_address": { 00:20:57.030 "trtype": "TCP", 00:20:57.030 "adrfam": "IPv4", 00:20:57.030 "traddr": "10.0.0.2", 00:20:57.030 "trsvcid": "4420" 00:20:57.030 }, 00:20:57.030 "peer_address": { 00:20:57.030 "trtype": "TCP", 00:20:57.030 "adrfam": "IPv4", 00:20:57.030 "traddr": "10.0.0.1", 00:20:57.030 "trsvcid": "54164" 00:20:57.030 }, 00:20:57.030 "auth": { 00:20:57.030 "state": "completed", 00:20:57.030 "digest": "sha256", 00:20:57.030 "dhgroup": "ffdhe6144" 00:20:57.030 } 00:20:57.030 } 00:20:57.030 ]' 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:57.030 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.289 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:57.289 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.289 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.289 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.289 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.548 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:57.548 16:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.489 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.489 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:58.748 16:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:59.335 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.335 { 00:20:59.335 "cntlid": 39, 00:20:59.335 "qid": 0, 00:20:59.335 "state": "enabled", 00:20:59.335 "thread": "nvmf_tgt_poll_group_000", 00:20:59.335 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:20:59.335 "listen_address": { 00:20:59.335 "trtype": "TCP", 00:20:59.335 "adrfam": 
"IPv4", 00:20:59.335 "traddr": "10.0.0.2", 00:20:59.335 "trsvcid": "4420" 00:20:59.335 }, 00:20:59.335 "peer_address": { 00:20:59.335 "trtype": "TCP", 00:20:59.335 "adrfam": "IPv4", 00:20:59.335 "traddr": "10.0.0.1", 00:20:59.335 "trsvcid": "54192" 00:20:59.335 }, 00:20:59.335 "auth": { 00:20:59.335 "state": "completed", 00:20:59.335 "digest": "sha256", 00:20:59.335 "dhgroup": "ffdhe6144" 00:20:59.335 } 00:20:59.335 } 00:20:59.335 ]' 00:20:59.335 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.594 16:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.852 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:20:59.852 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.792 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:00.792 16:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:01.051 
16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.051 16:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:01.993 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:01.993 16:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.993 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:01.993 { 00:21:01.993 "cntlid": 41, 00:21:01.993 "qid": 0, 00:21:01.993 "state": "enabled", 00:21:01.993 "thread": "nvmf_tgt_poll_group_000", 00:21:01.993 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:01.993 "listen_address": { 00:21:01.993 "trtype": "TCP", 00:21:01.993 "adrfam": "IPv4", 00:21:01.993 "traddr": "10.0.0.2", 00:21:01.993 "trsvcid": "4420" 00:21:01.993 }, 00:21:01.993 "peer_address": { 00:21:01.993 "trtype": "TCP", 00:21:01.993 "adrfam": "IPv4", 00:21:01.994 "traddr": "10.0.0.1", 00:21:01.994 "trsvcid": "49260" 00:21:01.994 }, 00:21:01.994 "auth": { 00:21:01.994 "state": "completed", 00:21:01.994 "digest": "sha256", 00:21:01.994 "dhgroup": "ffdhe8192" 00:21:01.994 } 00:21:01.994 } 00:21:01.994 ]' 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.253 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.511 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:02.511 16:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.450 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.450 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:03.708 16:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:04.651 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.651 16:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.651 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.910 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.910 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.910 { 00:21:04.910 "cntlid": 43, 00:21:04.910 "qid": 0, 00:21:04.910 "state": "enabled", 00:21:04.910 "thread": "nvmf_tgt_poll_group_000", 00:21:04.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:04.910 "listen_address": { 00:21:04.910 "trtype": "TCP", 00:21:04.910 "adrfam": "IPv4", 00:21:04.910 "traddr": "10.0.0.2", 00:21:04.910 "trsvcid": "4420" 00:21:04.910 }, 00:21:04.910 "peer_address": { 00:21:04.910 "trtype": "TCP", 00:21:04.910 "adrfam": "IPv4", 00:21:04.910 "traddr": "10.0.0.1", 00:21:04.910 "trsvcid": "49280" 00:21:04.910 }, 00:21:04.910 "auth": { 00:21:04.910 "state": "completed", 00:21:04.910 "digest": "sha256", 00:21:04.910 "dhgroup": "ffdhe8192" 00:21:04.910 } 00:21:04.910 } 00:21:04.910 ]' 00:21:04.910 16:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.910 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:04.910 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.910 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:04.910 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.910 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.911 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.911 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.169 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:05.169 16:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:06.111 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:06.112 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.370 16:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:07.329 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.329 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.329 { 00:21:07.329 "cntlid": 45, 00:21:07.329 "qid": 0, 00:21:07.329 "state": "enabled", 00:21:07.329 "thread": "nvmf_tgt_poll_group_000", 00:21:07.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:07.329 
"listen_address": { 00:21:07.329 "trtype": "TCP", 00:21:07.329 "adrfam": "IPv4", 00:21:07.330 "traddr": "10.0.0.2", 00:21:07.330 "trsvcid": "4420" 00:21:07.330 }, 00:21:07.330 "peer_address": { 00:21:07.330 "trtype": "TCP", 00:21:07.330 "adrfam": "IPv4", 00:21:07.330 "traddr": "10.0.0.1", 00:21:07.330 "trsvcid": "49314" 00:21:07.330 }, 00:21:07.330 "auth": { 00:21:07.330 "state": "completed", 00:21:07.330 "digest": "sha256", 00:21:07.330 "dhgroup": "ffdhe8192" 00:21:07.330 } 00:21:07.330 } 00:21:07.330 ]' 00:21:07.330 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.595 16:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.853 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:07.854 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:08.791 16:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.051 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:09.989 00:21:09.989 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.989 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:09.989 16:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.989 { 00:21:09.989 "cntlid": 47, 00:21:09.989 "qid": 0, 00:21:09.989 "state": "enabled", 00:21:09.989 "thread": "nvmf_tgt_poll_group_000", 00:21:09.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:09.989 "listen_address": { 00:21:09.989 "trtype": "TCP", 00:21:09.989 "adrfam": "IPv4", 00:21:09.989 "traddr": "10.0.0.2", 00:21:09.989 "trsvcid": "4420" 00:21:09.989 }, 00:21:09.989 "peer_address": { 00:21:09.989 "trtype": "TCP", 00:21:09.989 "adrfam": "IPv4", 00:21:09.989 "traddr": "10.0.0.1", 00:21:09.989 "trsvcid": "47050" 00:21:09.989 }, 00:21:09.989 "auth": { 00:21:09.989 "state": "completed", 00:21:09.989 "digest": "sha256", 00:21:09.989 "dhgroup": "ffdhe8192" 00:21:09.989 } 00:21:09.989 } 00:21:09.989 ]' 00:21:09.989 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:21:10.247 16:27:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.247 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.506 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:10.506 16:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.440 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:11.440 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.698 
16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.698 16:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:11.957 00:21:11.957 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.957 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.957 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:12.215 { 00:21:12.215 "cntlid": 49, 00:21:12.215 "qid": 0, 00:21:12.215 "state": "enabled", 00:21:12.215 "thread": "nvmf_tgt_poll_group_000", 00:21:12.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:12.215 "listen_address": { 00:21:12.215 "trtype": "TCP", 00:21:12.215 "adrfam": "IPv4", 00:21:12.215 "traddr": "10.0.0.2", 00:21:12.215 "trsvcid": "4420" 00:21:12.215 }, 00:21:12.215 "peer_address": { 00:21:12.215 "trtype": "TCP", 00:21:12.215 "adrfam": "IPv4", 00:21:12.215 "traddr": "10.0.0.1", 00:21:12.215 "trsvcid": "47082" 00:21:12.215 }, 00:21:12.215 "auth": { 00:21:12.215 "state": "completed", 00:21:12.215 "digest": "sha384", 00:21:12.215 "dhgroup": "null" 00:21:12.215 } 00:21:12.215 } 00:21:12.215 ]' 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:12.215 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:12.474 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:12.474 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:12.474 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:21:12.474 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.732 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:12.732 16:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:13.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:13.672 16:27:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:13.672 16:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:13.932 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:14.190 00:21:14.190 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.190 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.190 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.450 { 00:21:14.450 "cntlid": 51, 00:21:14.450 "qid": 0, 00:21:14.450 "state": "enabled", 00:21:14.450 "thread": "nvmf_tgt_poll_group_000", 00:21:14.450 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:14.450 "listen_address": { 00:21:14.450 "trtype": "TCP", 00:21:14.450 "adrfam": "IPv4", 00:21:14.450 "traddr": "10.0.0.2", 00:21:14.450 "trsvcid": "4420" 00:21:14.450 }, 00:21:14.450 "peer_address": { 00:21:14.450 "trtype": "TCP", 00:21:14.450 "adrfam": "IPv4", 00:21:14.450 "traddr": "10.0.0.1", 00:21:14.450 "trsvcid": "47102" 00:21:14.450 }, 00:21:14.450 "auth": { 00:21:14.450 "state": "completed", 00:21:14.450 "digest": "sha384", 00:21:14.450 "dhgroup": "null" 00:21:14.450 } 00:21:14.450 } 00:21:14.450 ]' 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:14.450 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:14.709 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:14.709 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:14.709 16:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.968 16:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:14.968 16:27:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:15.908 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.167 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:16.426 00:21:16.426 16:27:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.426 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.426 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.685 { 00:21:16.685 "cntlid": 53, 00:21:16.685 "qid": 0, 00:21:16.685 "state": "enabled", 00:21:16.685 "thread": "nvmf_tgt_poll_group_000", 00:21:16.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:16.685 "listen_address": { 00:21:16.685 "trtype": "TCP", 00:21:16.685 "adrfam": "IPv4", 00:21:16.685 "traddr": "10.0.0.2", 00:21:16.685 "trsvcid": "4420" 00:21:16.685 }, 00:21:16.685 "peer_address": { 00:21:16.685 "trtype": "TCP", 00:21:16.685 "adrfam": "IPv4", 00:21:16.685 "traddr": "10.0.0.1", 00:21:16.685 "trsvcid": "47124" 00:21:16.685 }, 00:21:16.685 "auth": { 00:21:16.685 "state": "completed", 00:21:16.685 "digest": "sha384", 00:21:16.685 "dhgroup": "null" 00:21:16.685 } 00:21:16.685 } 00:21:16.685 ]' 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.685 16:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.943 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:16.943 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.943 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.943 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.943 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:17.200 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:17.200 16:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:18.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.136 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:18.395 
16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.395 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:18.653 00:21:18.653 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.653 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.653 16:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.912 16:27:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.912 { 00:21:18.912 "cntlid": 55, 00:21:18.912 "qid": 0, 00:21:18.912 "state": "enabled", 00:21:18.912 "thread": "nvmf_tgt_poll_group_000", 00:21:18.912 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:18.912 "listen_address": { 00:21:18.912 "trtype": "TCP", 00:21:18.912 "adrfam": "IPv4", 00:21:18.912 "traddr": "10.0.0.2", 00:21:18.912 "trsvcid": "4420" 00:21:18.912 }, 00:21:18.912 "peer_address": { 00:21:18.912 "trtype": "TCP", 00:21:18.912 "adrfam": "IPv4", 00:21:18.912 "traddr": "10.0.0.1", 00:21:18.912 "trsvcid": "47148" 00:21:18.912 }, 00:21:18.912 "auth": { 00:21:18.912 "state": "completed", 00:21:18.912 "digest": "sha384", 00:21:18.912 "dhgroup": "null" 00:21:18.912 } 00:21:18.912 } 00:21:18.912 ]' 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:18.912 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.170 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.170 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.170 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.429 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:19.429 16:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.367 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:20.367 16:27:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.626 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.627 16:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:20.886 00:21:20.886 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.886 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.886 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.145 { 00:21:21.145 "cntlid": 57, 00:21:21.145 "qid": 0, 00:21:21.145 "state": "enabled", 00:21:21.145 "thread": "nvmf_tgt_poll_group_000", 00:21:21.145 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:21.145 "listen_address": { 00:21:21.145 "trtype": "TCP", 00:21:21.145 "adrfam": "IPv4", 00:21:21.145 "traddr": "10.0.0.2", 00:21:21.145 
"trsvcid": "4420" 00:21:21.145 }, 00:21:21.145 "peer_address": { 00:21:21.145 "trtype": "TCP", 00:21:21.145 "adrfam": "IPv4", 00:21:21.145 "traddr": "10.0.0.1", 00:21:21.145 "trsvcid": "34088" 00:21:21.145 }, 00:21:21.145 "auth": { 00:21:21.145 "state": "completed", 00:21:21.145 "digest": "sha384", 00:21:21.145 "dhgroup": "ffdhe2048" 00:21:21.145 } 00:21:21.145 } 00:21:21.145 ]' 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:21.145 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.404 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.404 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.404 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.663 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:21.663 16:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.604 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.863 16:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:22.863 16:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:23.122 00:21:23.122 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.122 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.122 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.380 { 00:21:23.380 "cntlid": 59, 00:21:23.380 "qid": 0, 00:21:23.380 "state": "enabled", 00:21:23.380 "thread": "nvmf_tgt_poll_group_000", 00:21:23.380 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:23.380 "listen_address": { 00:21:23.380 "trtype": "TCP", 00:21:23.380 "adrfam": "IPv4", 00:21:23.380 "traddr": "10.0.0.2", 00:21:23.380 "trsvcid": "4420" 00:21:23.380 }, 00:21:23.380 "peer_address": { 00:21:23.380 "trtype": "TCP", 00:21:23.380 "adrfam": "IPv4", 00:21:23.380 "traddr": "10.0.0.1", 00:21:23.380 "trsvcid": "34118" 00:21:23.380 }, 00:21:23.380 "auth": { 00:21:23.380 "state": "completed", 00:21:23.380 "digest": "sha384", 00:21:23.380 "dhgroup": "ffdhe2048" 00:21:23.380 } 00:21:23.380 } 00:21:23.380 ]' 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.380 16:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:23.380 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.638 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.638 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.638 16:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.897 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:23.897 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.836 16:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:24.836 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:25.406 00:21:25.406 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.406 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.406 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.665 16:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.665 { 00:21:25.665 "cntlid": 61, 00:21:25.665 "qid": 0, 00:21:25.665 "state": "enabled", 00:21:25.665 "thread": "nvmf_tgt_poll_group_000", 00:21:25.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:25.665 "listen_address": { 00:21:25.665 "trtype": "TCP", 00:21:25.665 "adrfam": "IPv4", 00:21:25.665 "traddr": "10.0.0.2", 00:21:25.665 "trsvcid": "4420" 00:21:25.665 }, 00:21:25.665 "peer_address": { 00:21:25.665 "trtype": "TCP", 00:21:25.665 "adrfam": "IPv4", 00:21:25.665 "traddr": "10.0.0.1", 00:21:25.665 "trsvcid": "34156" 00:21:25.665 }, 00:21:25.665 "auth": { 00:21:25.665 "state": "completed", 00:21:25.665 "digest": "sha384", 00:21:25.665 "dhgroup": "ffdhe2048" 00:21:25.665 } 00:21:25.665 } 00:21:25.665 ]' 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.665 16:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.924 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:25.925 16:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:26.865 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.124 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:27.382 00:21:27.382 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.382 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.382 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.641 { 00:21:27.641 "cntlid": 63, 00:21:27.641 "qid": 0, 00:21:27.641 "state": "enabled", 00:21:27.641 "thread": "nvmf_tgt_poll_group_000", 00:21:27.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:27.641 "listen_address": { 00:21:27.641 "trtype": "TCP", 00:21:27.641 "adrfam": 
"IPv4", 00:21:27.641 "traddr": "10.0.0.2", 00:21:27.641 "trsvcid": "4420" 00:21:27.641 }, 00:21:27.641 "peer_address": { 00:21:27.641 "trtype": "TCP", 00:21:27.641 "adrfam": "IPv4", 00:21:27.641 "traddr": "10.0.0.1", 00:21:27.641 "trsvcid": "34178" 00:21:27.641 }, 00:21:27.641 "auth": { 00:21:27.641 "state": "completed", 00:21:27.641 "digest": "sha384", 00:21:27.641 "dhgroup": "ffdhe2048" 00:21:27.641 } 00:21:27.641 } 00:21:27.641 ]' 00:21:27.641 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.899 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.899 16:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.899 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.899 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.899 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.899 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.899 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:28.158 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:28.158 16:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.092 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:29.350 
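The `--dhchap-secret` values passed to `nvme connect` above use the DHHC-1 transport representation: `DHHC-1:<hash id>:<base64 payload>:`, where the payload is the raw key followed by a 4-byte checksum. The sketch below parses one of the key3 secrets from this log; the checksum convention assumed here (little-endian zlib CRC-32 of the key, as generated by nvme-cli/libnvme) is my reading of that tooling, not something stated in the log.

```python
import base64
import struct
import zlib

# Hash identifiers used by the DHHC-1 secret representation.
HASH_IDS = {"00": "none", "01": "sha256", "02": "sha384", "03": "sha512"}

def parse_dhchap_secret(secret):
    """Split a DHHC-1 secret into (hash name, key bytes, crc_ok).
    Assumes the last 4 payload bytes are the little-endian CRC-32 (zlib)
    of the key -- an assumption based on how libnvme formats keys."""
    prefix, hash_id, b64, _empty = secret.split(":")
    if prefix != "DHHC-1":
        raise ValueError("not a DHHC-1 secret")
    blob = base64.b64decode(b64)
    key, crc = blob[:-4], blob[-4:]
    crc_ok = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF) == crc
    return HASH_IDS.get(hash_id, "unknown"), key, crc_ok

# One of the key3 secrets used in this log.
secret = ("DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFl"
          "MmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=:")
name, key, crc_ok = parse_dhchap_secret(secret)
print(name, len(key), crc_ok)  # a :03: (sha512) secret carries a 64-byte key
```

The `:03:` id matching a 64-byte key is why this secret is only ever supplied as `--dhchap-key key3` in the controller attach calls above.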
16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.350 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.609 00:21:29.609 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:29.609 16:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:29.609 16:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:29.869 { 00:21:29.869 "cntlid": 65, 00:21:29.869 "qid": 0, 00:21:29.869 "state": "enabled", 00:21:29.869 "thread": "nvmf_tgt_poll_group_000", 00:21:29.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:29.869 "listen_address": { 00:21:29.869 "trtype": "TCP", 00:21:29.869 "adrfam": "IPv4", 00:21:29.869 "traddr": "10.0.0.2", 00:21:29.869 "trsvcid": "4420" 00:21:29.869 }, 00:21:29.869 "peer_address": { 00:21:29.869 "trtype": "TCP", 00:21:29.869 "adrfam": "IPv4", 00:21:29.869 "traddr": "10.0.0.1", 00:21:29.869 "trsvcid": "38654" 00:21:29.869 }, 00:21:29.869 "auth": { 00:21:29.869 "state": "completed", 00:21:29.869 "digest": "sha384", 00:21:29.869 "dhgroup": "ffdhe3072" 00:21:29.869 } 00:21:29.869 } 00:21:29.869 ]' 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:29.869 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.127 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.127 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.127 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.127 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.127 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.384 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:30.384 16:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.319 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.577 16:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.835 00:21:31.835 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.835 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.835 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:32.093 16:27:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:32.093 { 00:21:32.093 "cntlid": 67, 00:21:32.093 "qid": 0, 00:21:32.093 "state": "enabled", 00:21:32.093 "thread": "nvmf_tgt_poll_group_000", 00:21:32.093 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:32.093 "listen_address": { 00:21:32.093 "trtype": "TCP", 00:21:32.093 "adrfam": "IPv4", 00:21:32.093 "traddr": "10.0.0.2", 00:21:32.093 "trsvcid": "4420" 00:21:32.093 }, 00:21:32.093 "peer_address": { 00:21:32.093 "trtype": "TCP", 00:21:32.093 "adrfam": "IPv4", 00:21:32.093 "traddr": "10.0.0.1", 00:21:32.093 "trsvcid": "38670" 00:21:32.093 }, 00:21:32.093 "auth": { 00:21:32.093 "state": "completed", 00:21:32.093 "digest": "sha384", 00:21:32.093 "dhgroup": "ffdhe3072" 00:21:32.093 } 00:21:32.093 } 00:21:32.093 ]' 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:32.093 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.352 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:32.352 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.352 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.352 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.352 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.622 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:32.622 16:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:33.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.314 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.616 16:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.224 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.224 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.512 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.512 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.512 { 00:21:34.512 "cntlid": 69, 00:21:34.512 "qid": 0, 00:21:34.512 "state": "enabled", 00:21:34.512 "thread": "nvmf_tgt_poll_group_000", 00:21:34.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:34.512 
"listen_address": { 00:21:34.512 "trtype": "TCP", 00:21:34.512 "adrfam": "IPv4", 00:21:34.512 "traddr": "10.0.0.2", 00:21:34.512 "trsvcid": "4420" 00:21:34.512 }, 00:21:34.512 "peer_address": { 00:21:34.512 "trtype": "TCP", 00:21:34.512 "adrfam": "IPv4", 00:21:34.512 "traddr": "10.0.0.1", 00:21:34.512 "trsvcid": "38680" 00:21:34.512 }, 00:21:34.512 "auth": { 00:21:34.513 "state": "completed", 00:21:34.513 "digest": "sha384", 00:21:34.513 "dhgroup": "ffdhe3072" 00:21:34.513 } 00:21:34.513 } 00:21:34.513 ]' 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:34.513 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.808 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:34.808 16:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.768 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:35.768 16:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.026 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.286 00:21:36.545 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.545 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:36.546 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.803 { 00:21:36.803 "cntlid": 71, 00:21:36.803 "qid": 0, 00:21:36.803 "state": "enabled", 00:21:36.803 "thread": "nvmf_tgt_poll_group_000", 00:21:36.803 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:36.803 "listen_address": { 00:21:36.803 "trtype": "TCP", 00:21:36.803 "adrfam": "IPv4", 00:21:36.803 "traddr": "10.0.0.2", 00:21:36.803 "trsvcid": "4420" 00:21:36.803 }, 00:21:36.803 "peer_address": { 00:21:36.803 "trtype": "TCP", 00:21:36.803 "adrfam": "IPv4", 00:21:36.803 "traddr": "10.0.0.1", 00:21:36.803 "trsvcid": "38706" 00:21:36.803 }, 00:21:36.803 "auth": { 00:21:36.803 "state": "completed", 00:21:36.803 "digest": "sha384", 00:21:36.803 "dhgroup": "ffdhe3072" 00:21:36.803 } 00:21:36.803 } 00:21:36.803 ]' 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:36.803 16:27:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.803 16:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.803 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.803 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.803 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.060 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:37.060 16:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:37.996 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.255 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.513 00:21:38.513 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.513 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.513 16:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.772 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.772 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.029 16:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.029 { 00:21:39.029 "cntlid": 73, 00:21:39.029 "qid": 0, 00:21:39.029 "state": "enabled", 00:21:39.029 "thread": "nvmf_tgt_poll_group_000", 00:21:39.029 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:39.029 "listen_address": { 00:21:39.029 "trtype": "TCP", 00:21:39.029 "adrfam": "IPv4", 00:21:39.029 "traddr": "10.0.0.2", 00:21:39.029 "trsvcid": "4420" 00:21:39.029 }, 00:21:39.029 "peer_address": { 00:21:39.029 "trtype": "TCP", 00:21:39.029 "adrfam": "IPv4", 00:21:39.029 "traddr": "10.0.0.1", 00:21:39.029 "trsvcid": "38726" 00:21:39.029 }, 00:21:39.029 "auth": { 00:21:39.029 "state": "completed", 00:21:39.029 "digest": "sha384", 00:21:39.029 "dhgroup": "ffdhe4096" 00:21:39.029 } 00:21:39.029 } 00:21:39.029 ]' 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:39.029 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:39.029 16:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:39.286 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:39.286 16:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.218 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:40.476 16:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:41.041 00:21:41.041 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.041 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.041 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.299 { 00:21:41.299 "cntlid": 75, 00:21:41.299 "qid": 0, 00:21:41.299 "state": "enabled", 00:21:41.299 "thread": "nvmf_tgt_poll_group_000", 00:21:41.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:41.299 
"listen_address": { 00:21:41.299 "trtype": "TCP", 00:21:41.299 "adrfam": "IPv4", 00:21:41.299 "traddr": "10.0.0.2", 00:21:41.299 "trsvcid": "4420" 00:21:41.299 }, 00:21:41.299 "peer_address": { 00:21:41.299 "trtype": "TCP", 00:21:41.299 "adrfam": "IPv4", 00:21:41.299 "traddr": "10.0.0.1", 00:21:41.299 "trsvcid": "41830" 00:21:41.299 }, 00:21:41.299 "auth": { 00:21:41.299 "state": "completed", 00:21:41.299 "digest": "sha384", 00:21:41.299 "dhgroup": "ffdhe4096" 00:21:41.299 } 00:21:41.299 } 00:21:41.299 ]' 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.299 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.557 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:41.557 16:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.490 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:42.747 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:42.747 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.747 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:42.747 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:42.747 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:42.748 16:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:43.313 00:21:43.313 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:43.313 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.313 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.570 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.570 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.571 { 00:21:43.571 "cntlid": 77, 00:21:43.571 "qid": 0, 00:21:43.571 "state": "enabled", 00:21:43.571 "thread": "nvmf_tgt_poll_group_000", 00:21:43.571 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:43.571 "listen_address": { 00:21:43.571 "trtype": "TCP", 00:21:43.571 "adrfam": "IPv4", 00:21:43.571 "traddr": "10.0.0.2", 00:21:43.571 "trsvcid": "4420" 00:21:43.571 }, 00:21:43.571 "peer_address": { 00:21:43.571 "trtype": "TCP", 00:21:43.571 "adrfam": "IPv4", 00:21:43.571 "traddr": "10.0.0.1", 00:21:43.571 "trsvcid": "41852" 00:21:43.571 }, 00:21:43.571 "auth": { 00:21:43.571 "state": "completed", 00:21:43.571 "digest": "sha384", 00:21:43.571 "dhgroup": "ffdhe4096" 00:21:43.571 } 00:21:43.571 } 00:21:43.571 ]' 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.571 16:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.571 16:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.828 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:43.829 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.760 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:44.760 16:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:45.018 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:45.018 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.018 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:45.019 16:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.019 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:45.276 00:21:45.534 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.534 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.534 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.791 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.791 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.791 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.791 16:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.792 { 00:21:45.792 "cntlid": 79, 00:21:45.792 "qid": 0, 00:21:45.792 "state": "enabled", 00:21:45.792 "thread": "nvmf_tgt_poll_group_000", 00:21:45.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:45.792 "listen_address": { 00:21:45.792 "trtype": "TCP", 00:21:45.792 "adrfam": "IPv4", 00:21:45.792 "traddr": "10.0.0.2", 00:21:45.792 "trsvcid": "4420" 00:21:45.792 }, 00:21:45.792 "peer_address": { 00:21:45.792 "trtype": "TCP", 00:21:45.792 "adrfam": "IPv4", 00:21:45.792 "traddr": "10.0.0.1", 00:21:45.792 "trsvcid": "41880" 00:21:45.792 }, 00:21:45.792 "auth": { 00:21:45.792 "state": "completed", 00:21:45.792 "digest": "sha384", 00:21:45.792 "dhgroup": "ffdhe4096" 00:21:45.792 } 00:21:45.792 } 00:21:45.792 ]' 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.792 16:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.792 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.792 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.792 16:27:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.049 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:46.049 16:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.983 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:46.983 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.242 16:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:47.807 00:21:47.807 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.807 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.807 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.065 { 00:21:48.065 "cntlid": 81, 00:21:48.065 "qid": 0, 00:21:48.065 "state": "enabled", 00:21:48.065 "thread": "nvmf_tgt_poll_group_000", 00:21:48.065 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:48.065 "listen_address": { 
00:21:48.065 "trtype": "TCP", 00:21:48.065 "adrfam": "IPv4", 00:21:48.065 "traddr": "10.0.0.2", 00:21:48.065 "trsvcid": "4420" 00:21:48.065 }, 00:21:48.065 "peer_address": { 00:21:48.065 "trtype": "TCP", 00:21:48.065 "adrfam": "IPv4", 00:21:48.065 "traddr": "10.0.0.1", 00:21:48.065 "trsvcid": "41912" 00:21:48.065 }, 00:21:48.065 "auth": { 00:21:48.065 "state": "completed", 00:21:48.065 "digest": "sha384", 00:21:48.065 "dhgroup": "ffdhe6144" 00:21:48.065 } 00:21:48.065 } 00:21:48.065 ]' 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.065 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.630 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:48.630 16:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:49.562 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:49.821 16:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.386 00:21:50.386 16:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.386 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.386 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.644 { 00:21:50.644 "cntlid": 83, 00:21:50.644 "qid": 0, 00:21:50.644 "state": "enabled", 00:21:50.644 "thread": "nvmf_tgt_poll_group_000", 00:21:50.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:50.644 "listen_address": { 00:21:50.644 "trtype": "TCP", 00:21:50.644 "adrfam": "IPv4", 00:21:50.644 "traddr": "10.0.0.2", 00:21:50.644 "trsvcid": "4420" 00:21:50.644 }, 00:21:50.644 "peer_address": { 00:21:50.644 "trtype": "TCP", 00:21:50.644 "adrfam": "IPv4", 00:21:50.644 "traddr": "10.0.0.1", 00:21:50.644 "trsvcid": "46750" 00:21:50.644 }, 00:21:50.644 "auth": { 00:21:50.644 "state": "completed", 00:21:50.644 "digest": "sha384", 00:21:50.644 "dhgroup": "ffdhe6144" 00:21:50.644 } 00:21:50.644 } 00:21:50.644 ]' 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.644 16:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.902 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:50.902 16:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.835 16:27:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:51.835 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:52.092 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:52.092 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:52.092 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:52.092 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:52.092 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:52.093 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:52.093 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.093 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.093 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.350 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.350 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.350 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.350 16:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.915 00:21:52.915 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.915 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.915 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.174 { 00:21:53.174 "cntlid": 85, 00:21:53.174 "qid": 0, 00:21:53.174 "state": "enabled", 00:21:53.174 "thread": "nvmf_tgt_poll_group_000", 00:21:53.174 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:53.174 "listen_address": { 00:21:53.174 "trtype": "TCP", 00:21:53.174 "adrfam": "IPv4", 00:21:53.174 "traddr": "10.0.0.2", 00:21:53.174 "trsvcid": "4420" 00:21:53.174 }, 00:21:53.174 "peer_address": { 00:21:53.174 "trtype": "TCP", 00:21:53.174 "adrfam": "IPv4", 00:21:53.174 "traddr": "10.0.0.1", 00:21:53.174 "trsvcid": "46772" 00:21:53.174 }, 00:21:53.174 "auth": { 00:21:53.174 "state": "completed", 00:21:53.174 "digest": "sha384", 00:21:53.174 "dhgroup": "ffdhe6144" 00:21:53.174 } 00:21:53.174 } 00:21:53.174 ]' 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:53.174 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.432 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:53.432 16:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.366 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:54.623 16:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.188 00:21:55.188 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.188 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.188 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.446 { 00:21:55.446 "cntlid": 87, 00:21:55.446 "qid": 0, 00:21:55.446 "state": "enabled", 00:21:55.446 "thread": "nvmf_tgt_poll_group_000", 00:21:55.446 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:55.446 "listen_address": { 00:21:55.446 "trtype": 
"TCP", 00:21:55.446 "adrfam": "IPv4", 00:21:55.446 "traddr": "10.0.0.2", 00:21:55.446 "trsvcid": "4420" 00:21:55.446 }, 00:21:55.446 "peer_address": { 00:21:55.446 "trtype": "TCP", 00:21:55.446 "adrfam": "IPv4", 00:21:55.446 "traddr": "10.0.0.1", 00:21:55.446 "trsvcid": "46804" 00:21:55.446 }, 00:21:55.446 "auth": { 00:21:55.446 "state": "completed", 00:21:55.446 "digest": "sha384", 00:21:55.446 "dhgroup": "ffdhe6144" 00:21:55.446 } 00:21:55.446 } 00:21:55.446 ]' 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.446 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.703 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.703 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.703 16:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.961 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:55.961 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:56.895 16:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.895 16:27:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:56.895 16:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:57.827 00:21:57.827 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.827 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.827 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.086 { 00:21:58.086 "cntlid": 89, 00:21:58.086 "qid": 0, 00:21:58.086 "state": "enabled", 00:21:58.086 "thread": "nvmf_tgt_poll_group_000", 00:21:58.086 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:21:58.086 "listen_address": { 00:21:58.086 "trtype": "TCP", 00:21:58.086 "adrfam": "IPv4", 00:21:58.086 "traddr": "10.0.0.2", 00:21:58.086 "trsvcid": "4420" 00:21:58.086 }, 00:21:58.086 "peer_address": { 00:21:58.086 "trtype": "TCP", 00:21:58.086 "adrfam": "IPv4", 00:21:58.086 "traddr": "10.0.0.1", 00:21:58.086 "trsvcid": "46842" 00:21:58.086 }, 00:21:58.086 "auth": { 00:21:58.086 "state": "completed", 00:21:58.086 "digest": "sha384", 00:21:58.086 "dhgroup": "ffdhe8192" 00:21:58.086 } 00:21:58.086 } 00:21:58.086 ]' 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:58.086 16:27:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:58.086 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:58.343 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:58.343 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:58.343 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:58.601 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:58.601 16:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.534 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.792 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.792 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.792 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:59.792 16:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:00.357 00:22:00.615 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.615 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.615 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.872 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.872 { 00:22:00.872 "cntlid": 91, 00:22:00.872 "qid": 0, 00:22:00.872 "state": "enabled", 00:22:00.872 "thread": "nvmf_tgt_poll_group_000", 00:22:00.872 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:00.872 "listen_address": { 00:22:00.872 "trtype": "TCP", 00:22:00.873 "adrfam": "IPv4", 00:22:00.873 "traddr": "10.0.0.2", 00:22:00.873 "trsvcid": "4420" 00:22:00.873 }, 00:22:00.873 "peer_address": { 00:22:00.873 "trtype": "TCP", 00:22:00.873 "adrfam": "IPv4", 00:22:00.873 "traddr": "10.0.0.1", 00:22:00.873 "trsvcid": "59486" 00:22:00.873 }, 00:22:00.873 "auth": { 00:22:00.873 "state": "completed", 00:22:00.873 "digest": "sha384", 00:22:00.873 "dhgroup": "ffdhe8192" 00:22:00.873 } 00:22:00.873 } 00:22:00.873 ]' 00:22:00.873 16:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.873 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.130 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:01.131 16:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:02.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.063 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:02.320 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:22:02.320 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:02.320 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:02.320 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:02.320 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:02.321 16:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:03.253 00:22:03.253 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:03.253 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:03.253 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:03.512 { 00:22:03.512 "cntlid": 93, 00:22:03.512 "qid": 0, 00:22:03.512 "state": "enabled", 00:22:03.512 "thread": "nvmf_tgt_poll_group_000", 00:22:03.512 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:03.512 "listen_address": { 00:22:03.512 "trtype": "TCP", 00:22:03.512 "adrfam": "IPv4", 00:22:03.512 "traddr": "10.0.0.2", 00:22:03.512 "trsvcid": "4420" 00:22:03.512 }, 00:22:03.512 "peer_address": { 00:22:03.512 "trtype": "TCP", 00:22:03.512 "adrfam": "IPv4", 00:22:03.512 "traddr": "10.0.0.1", 00:22:03.512 "trsvcid": "59508" 00:22:03.512 }, 00:22:03.512 "auth": { 00:22:03.512 "state": "completed", 00:22:03.512 "digest": "sha384", 00:22:03.512 "dhgroup": "ffdhe8192" 00:22:03.512 } 00:22:03.512 } 00:22:03.512 ]' 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.512 16:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.770 16:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:03.770 16:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:04.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:04.702 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.267 16:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:05.832 00:22:05.832 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:22:05.832 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.832 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:06.397 { 00:22:06.397 "cntlid": 95, 00:22:06.397 "qid": 0, 00:22:06.397 "state": "enabled", 00:22:06.397 "thread": "nvmf_tgt_poll_group_000", 00:22:06.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:06.397 "listen_address": { 00:22:06.397 "trtype": "TCP", 00:22:06.397 "adrfam": "IPv4", 00:22:06.397 "traddr": "10.0.0.2", 00:22:06.397 "trsvcid": "4420" 00:22:06.397 }, 00:22:06.397 "peer_address": { 00:22:06.397 "trtype": "TCP", 00:22:06.397 "adrfam": "IPv4", 00:22:06.397 "traddr": "10.0.0.1", 00:22:06.397 "trsvcid": "59536" 00:22:06.397 }, 00:22:06.397 "auth": { 00:22:06.397 "state": "completed", 00:22:06.397 "digest": "sha384", 00:22:06.397 "dhgroup": "ffdhe8192" 00:22:06.397 } 00:22:06.397 } 00:22:06.397 ]' 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:06.397 16:27:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.397 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.655 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:06.655 16:27:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:07.588 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:07.845 16:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.103 00:22:08.103 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:08.103 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:08.103 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:08.361 16:27:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:08.361 { 00:22:08.361 "cntlid": 97, 00:22:08.361 "qid": 0, 00:22:08.361 "state": "enabled", 00:22:08.361 "thread": "nvmf_tgt_poll_group_000", 00:22:08.361 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:08.361 "listen_address": { 00:22:08.361 "trtype": "TCP", 00:22:08.361 "adrfam": "IPv4", 00:22:08.361 "traddr": "10.0.0.2", 00:22:08.361 "trsvcid": "4420" 00:22:08.361 }, 00:22:08.361 "peer_address": { 00:22:08.361 "trtype": "TCP", 00:22:08.361 "adrfam": "IPv4", 00:22:08.361 "traddr": "10.0.0.1", 00:22:08.361 "trsvcid": "59560" 00:22:08.361 }, 00:22:08.361 "auth": { 00:22:08.361 "state": "completed", 00:22:08.361 "digest": "sha512", 00:22:08.361 "dhgroup": "null" 00:22:08.361 } 00:22:08.361 } 00:22:08.361 ]' 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:08.361 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:08.619 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:08.619 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:08.619 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.877 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:08.877 16:27:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.810 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:09.810 16:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.068 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.069 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.069 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.069 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.327 00:22:10.327 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:10.327 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:10.327 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:10.584 { 00:22:10.584 "cntlid": 99, 
00:22:10.584 "qid": 0, 00:22:10.584 "state": "enabled", 00:22:10.584 "thread": "nvmf_tgt_poll_group_000", 00:22:10.584 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:10.584 "listen_address": { 00:22:10.584 "trtype": "TCP", 00:22:10.584 "adrfam": "IPv4", 00:22:10.584 "traddr": "10.0.0.2", 00:22:10.584 "trsvcid": "4420" 00:22:10.584 }, 00:22:10.584 "peer_address": { 00:22:10.584 "trtype": "TCP", 00:22:10.584 "adrfam": "IPv4", 00:22:10.584 "traddr": "10.0.0.1", 00:22:10.584 "trsvcid": "52384" 00:22:10.584 }, 00:22:10.584 "auth": { 00:22:10.584 "state": "completed", 00:22:10.584 "digest": "sha512", 00:22:10.584 "dhgroup": "null" 00:22:10.584 } 00:22:10.584 } 00:22:10.584 ]' 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:10.584 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:10.842 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.842 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.842 16:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:11.100 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret 
DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:11.100 16:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:12.032 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:12.032 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:12.033 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.290 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:12.548 00:22:12.548 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:12.548 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:12.548 16:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:12.806 { 00:22:12.806 "cntlid": 101, 00:22:12.806 "qid": 0, 00:22:12.806 "state": "enabled", 00:22:12.806 "thread": "nvmf_tgt_poll_group_000", 00:22:12.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:12.806 "listen_address": { 00:22:12.806 "trtype": "TCP", 00:22:12.806 "adrfam": "IPv4", 00:22:12.806 "traddr": "10.0.0.2", 00:22:12.806 "trsvcid": "4420" 00:22:12.806 }, 00:22:12.806 "peer_address": { 00:22:12.806 "trtype": "TCP", 00:22:12.806 "adrfam": "IPv4", 00:22:12.806 "traddr": "10.0.0.1", 00:22:12.806 "trsvcid": "52400" 00:22:12.806 }, 00:22:12.806 "auth": { 00:22:12.806 "state": "completed", 00:22:12.806 "digest": "sha512", 00:22:12.806 "dhgroup": "null" 00:22:12.806 } 00:22:12.806 } 
00:22:12.806 ]' 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:12.806 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.063 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:13.063 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.063 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.063 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.063 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.321 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:13.321 16:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.253 
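Each iteration of this test verifies the negotiated DH-HMAC-CHAP parameters by piping the `nvmf_subsystem_get_qpairs` RPC output through `jq` (auth.sh lines 75-77: `.[0].auth.digest`, `.[0].auth.dhgroup`, `.[0].auth.state`). A minimal Python sketch of that same check, using a qpair record copied from the log output above (the helper name `check_auth` is ours, not part of SPDK):

```python
import json

# Qpair record copied from the nvmf_subsystem_get_qpairs output above
# (cntlid 101, sha512 digest, "null" dhgroup); field names as shown in the log.
QPAIRS_JSON = """
[
  {
    "cntlid": 101,
    "qid": 0,
    "state": "enabled",
    "thread": "nvmf_tgt_poll_group_000",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
    "listen_address": { "trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420" },
    "peer_address": { "trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.1", "trsvcid": "52400" },
    "auth": { "state": "completed", "digest": "sha512", "dhgroup": "null" }
  }
]
"""

def check_auth(qpairs_json: str, digest: str, dhgroup: str) -> bool:
    """Mirror the jq assertions in auth.sh: the first qpair must report the
    expected digest and dhgroup, and its auth state must be "completed"."""
    auth = json.loads(qpairs_json)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(QPAIRS_JSON, "sha512", "null"))  # True for the qpair above
```

The shell test repeats this check for every digest/dhgroup/key combination; the qpair JSON shape is identical in each iteration, only `cntlid`, `dhgroup`, and the peer port change.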
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.253 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:22:14.510 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:22:14.510 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:14.510 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:14.510 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:22:14.510 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.511 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.768 00:22:14.768 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:14.768 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.768 16:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:15.026 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:15.026 { 00:22:15.026 "cntlid": 103, 00:22:15.026 "qid": 0, 00:22:15.026 "state": "enabled", 00:22:15.026 "thread": "nvmf_tgt_poll_group_000", 00:22:15.026 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.026 "listen_address": { 00:22:15.026 "trtype": "TCP", 00:22:15.026 "adrfam": "IPv4", 00:22:15.026 "traddr": "10.0.0.2", 00:22:15.026 "trsvcid": "4420" 00:22:15.026 }, 00:22:15.026 "peer_address": { 00:22:15.027 "trtype": "TCP", 00:22:15.027 "adrfam": "IPv4", 00:22:15.027 "traddr": "10.0.0.1", 00:22:15.027 "trsvcid": "52428" 00:22:15.027 }, 00:22:15.027 "auth": { 00:22:15.027 "state": "completed", 00:22:15.027 "digest": "sha512", 00:22:15.027 "dhgroup": "null" 00:22:15.027 } 00:22:15.027 } 00:22:15.027 ]' 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:15.027 16:28:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:15.027 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:15.592 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:15.592 16:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:16.157 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:16.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:16.415 16:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.415 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.673 16:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:16.931 00:22:16.931 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:16.931 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:16.931 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:17.189 { 00:22:17.189 "cntlid": 105, 00:22:17.189 "qid": 0, 00:22:17.189 "state": "enabled", 00:22:17.189 "thread": "nvmf_tgt_poll_group_000", 00:22:17.189 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:17.189 "listen_address": { 00:22:17.189 "trtype": "TCP", 00:22:17.189 "adrfam": "IPv4", 00:22:17.189 "traddr": "10.0.0.2", 00:22:17.189 "trsvcid": "4420" 00:22:17.189 }, 00:22:17.189 "peer_address": { 00:22:17.189 "trtype": "TCP", 00:22:17.189 "adrfam": "IPv4", 00:22:17.189 "traddr": "10.0.0.1", 00:22:17.189 "trsvcid": "52444" 00:22:17.189 }, 00:22:17.189 "auth": { 00:22:17.189 "state": "completed", 00:22:17.189 "digest": "sha512", 00:22:17.189 "dhgroup": "ffdhe2048" 00:22:17.189 } 00:22:17.189 } 00:22:17.189 ]' 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:17.189 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:17.755 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret 
DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:17.756 16:28:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:18.689 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.689 16:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:18.946 16:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.946 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:18.947 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:19.204 00:22:19.204 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:19.204 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:19.204 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:19.462 { 00:22:19.462 "cntlid": 107, 00:22:19.462 "qid": 0, 00:22:19.462 "state": "enabled", 00:22:19.462 "thread": "nvmf_tgt_poll_group_000", 00:22:19.462 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:19.462 "listen_address": { 00:22:19.462 "trtype": "TCP", 00:22:19.462 "adrfam": "IPv4", 00:22:19.462 "traddr": "10.0.0.2", 00:22:19.462 "trsvcid": "4420" 00:22:19.462 }, 00:22:19.462 "peer_address": { 00:22:19.462 "trtype": "TCP", 00:22:19.462 "adrfam": "IPv4", 00:22:19.462 "traddr": "10.0.0.1", 00:22:19.462 "trsvcid": "52470" 00:22:19.462 }, 00:22:19.462 "auth": { 00:22:19.462 "state": 
"completed", 00:22:19.462 "digest": "sha512", 00:22:19.462 "dhgroup": "ffdhe2048" 00:22:19.462 } 00:22:19.462 } 00:22:19.462 ]' 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:19.462 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:19.720 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.720 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.720 16:28:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.978 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:19.978 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:20.912 16:28:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.912 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.912 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:20.912 16:28:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.912 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:20.912 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:20.912 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:20.912 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.170 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:21.429 00:22:21.429 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:21.429 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:21.429 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:21.687 
16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:21.687 { 00:22:21.687 "cntlid": 109, 00:22:21.687 "qid": 0, 00:22:21.687 "state": "enabled", 00:22:21.687 "thread": "nvmf_tgt_poll_group_000", 00:22:21.687 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.687 "listen_address": { 00:22:21.687 "trtype": "TCP", 00:22:21.687 "adrfam": "IPv4", 00:22:21.687 "traddr": "10.0.0.2", 00:22:21.687 "trsvcid": "4420" 00:22:21.687 }, 00:22:21.687 "peer_address": { 00:22:21.687 "trtype": "TCP", 00:22:21.687 "adrfam": "IPv4", 00:22:21.687 "traddr": "10.0.0.1", 00:22:21.687 "trsvcid": "35760" 00:22:21.687 }, 00:22:21.687 "auth": { 00:22:21.687 "state": "completed", 00:22:21.687 "digest": "sha512", 00:22:21.687 "dhgroup": "ffdhe2048" 00:22:21.687 } 00:22:21.687 } 00:22:21.687 ]' 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:21.687 16:28:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:21.687 16:28:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:21.687 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:21.687 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:21.687 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:22.252 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:22.252 16:28:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:23.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.185 
16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.185 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.443 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.443 16:28:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:23.443 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.443 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:23.700 00:22:23.700 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:23.700 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:23.700 16:28:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:23.958 { 00:22:23.958 "cntlid": 111, 
00:22:23.958 "qid": 0, 00:22:23.958 "state": "enabled", 00:22:23.958 "thread": "nvmf_tgt_poll_group_000", 00:22:23.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:23.958 "listen_address": { 00:22:23.958 "trtype": "TCP", 00:22:23.958 "adrfam": "IPv4", 00:22:23.958 "traddr": "10.0.0.2", 00:22:23.958 "trsvcid": "4420" 00:22:23.958 }, 00:22:23.958 "peer_address": { 00:22:23.958 "trtype": "TCP", 00:22:23.958 "adrfam": "IPv4", 00:22:23.958 "traddr": "10.0.0.1", 00:22:23.958 "trsvcid": "35786" 00:22:23.958 }, 00:22:23.958 "auth": { 00:22:23.958 "state": "completed", 00:22:23.958 "digest": "sha512", 00:22:23.958 "dhgroup": "ffdhe2048" 00:22:23.958 } 00:22:23.958 } 00:22:23.958 ]' 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:23.958 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:24.524 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:24.524 16:28:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:25.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:25.457 16:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:25.457 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:25.715 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:25.715 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.715 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.715 16:28:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:25.972 00:22:25.972 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:25.973 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:25.973 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:26.230 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:26.230 { 00:22:26.230 "cntlid": 113, 00:22:26.230 "qid": 0, 00:22:26.230 "state": "enabled", 00:22:26.230 "thread": "nvmf_tgt_poll_group_000", 00:22:26.230 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:26.230 "listen_address": { 00:22:26.230 "trtype": "TCP", 00:22:26.230 "adrfam": "IPv4", 00:22:26.230 "traddr": "10.0.0.2", 00:22:26.230 "trsvcid": "4420" 00:22:26.230 }, 00:22:26.231 "peer_address": { 00:22:26.231 "trtype": "TCP", 00:22:26.231 "adrfam": "IPv4", 00:22:26.231 "traddr": "10.0.0.1", 00:22:26.231 "trsvcid": "35816" 00:22:26.231 }, 00:22:26.231 "auth": { 00:22:26.231 "state": 
"completed", 00:22:26.231 "digest": "sha512", 00:22:26.231 "dhgroup": "ffdhe3072" 00:22:26.231 } 00:22:26.231 } 00:22:26.231 ]' 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:26.231 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:26.797 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:26.797 16:28:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret 
DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:27.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.729 16:28:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:27.986 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:22:27.986 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:27.986 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:27.986 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:27.986 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:27.987 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:28.245 00:22:28.245 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:28.245 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.245 16:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:28.502 { 00:22:28.502 "cntlid": 115, 00:22:28.502 "qid": 0, 00:22:28.502 "state": "enabled", 00:22:28.502 "thread": "nvmf_tgt_poll_group_000", 00:22:28.502 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:28.502 "listen_address": { 00:22:28.502 "trtype": "TCP", 00:22:28.502 "adrfam": "IPv4", 00:22:28.502 "traddr": "10.0.0.2", 00:22:28.502 "trsvcid": "4420" 00:22:28.502 }, 00:22:28.502 "peer_address": { 00:22:28.502 "trtype": "TCP", 00:22:28.502 "adrfam": "IPv4", 00:22:28.502 "traddr": "10.0.0.1", 00:22:28.502 "trsvcid": "35858" 00:22:28.502 }, 00:22:28.502 "auth": { 00:22:28.502 "state": "completed", 00:22:28.502 "digest": "sha512", 00:22:28.502 "dhgroup": "ffdhe3072" 00:22:28.502 } 00:22:28.502 } 00:22:28.502 ]' 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:28.502 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:28.759 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:28.759 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:28.759 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:28.759 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:28.759 16:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:29.017 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:29.017 16:28:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:29.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.950 16:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:29.950 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.208 16:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.208 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:30.466 00:22:30.466 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:30.466 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:30.466 16:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:30.724 { 00:22:30.724 "cntlid": 117, 00:22:30.724 "qid": 0, 00:22:30.724 "state": "enabled", 00:22:30.724 "thread": "nvmf_tgt_poll_group_000", 00:22:30.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:30.724 "listen_address": { 00:22:30.724 "trtype": "TCP", 00:22:30.724 "adrfam": "IPv4", 00:22:30.724 "traddr": "10.0.0.2", 00:22:30.724 "trsvcid": "4420" 00:22:30.724 }, 00:22:30.724 "peer_address": { 00:22:30.724 "trtype": "TCP", 00:22:30.724 "adrfam": "IPv4", 00:22:30.724 "traddr": "10.0.0.1", 00:22:30.724 "trsvcid": "45826" 00:22:30.724 }, 00:22:30.724 "auth": { 00:22:30.724 "state": "completed", 00:22:30.724 "digest": "sha512", 00:22:30.724 "dhgroup": "ffdhe3072" 00:22:30.724 } 00:22:30.724 } 00:22:30.724 ]' 00:22:30.724 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:30.983 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:22:31.242 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:31.242 16:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:32.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:32.175 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.433 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:32.690 00:22:32.690 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:32.690 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:32.690 16:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:32.947 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:32.947 { 00:22:32.947 "cntlid": 119, 00:22:32.947 "qid": 0, 00:22:32.947 "state": "enabled", 00:22:32.947 "thread": "nvmf_tgt_poll_group_000", 00:22:32.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:32.947 "listen_address": { 00:22:32.947 "trtype": "TCP", 00:22:32.947 "adrfam": "IPv4", 00:22:32.947 "traddr": "10.0.0.2", 00:22:32.947 "trsvcid": "4420" 00:22:32.947 }, 00:22:32.947 "peer_address": { 00:22:32.947 "trtype": "TCP", 00:22:32.947 "adrfam": "IPv4", 00:22:32.947 "traddr": "10.0.0.1", 00:22:32.948 "trsvcid": "45862" 00:22:32.948 }, 00:22:32.948 "auth": { 00:22:32.948 
"state": "completed", 00:22:32.948 "digest": "sha512", 00:22:32.948 "dhgroup": "ffdhe3072" 00:22:32.948 } 00:22:32.948 } 00:22:32.948 ]' 00:22:32.948 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:33.205 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:33.463 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:33.463 16:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:34.394 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:34.394 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:34.394 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:34.395 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:34.652 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.653 16:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:34.910 00:22:34.910 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:34.910 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:34.910 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:35.167 
16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:35.167 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:35.167 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:35.167 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:35.426 { 00:22:35.426 "cntlid": 121, 00:22:35.426 "qid": 0, 00:22:35.426 "state": "enabled", 00:22:35.426 "thread": "nvmf_tgt_poll_group_000", 00:22:35.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:35.426 "listen_address": { 00:22:35.426 "trtype": "TCP", 00:22:35.426 "adrfam": "IPv4", 00:22:35.426 "traddr": "10.0.0.2", 00:22:35.426 "trsvcid": "4420" 00:22:35.426 }, 00:22:35.426 "peer_address": { 00:22:35.426 "trtype": "TCP", 00:22:35.426 "adrfam": "IPv4", 00:22:35.426 "traddr": "10.0.0.1", 00:22:35.426 "trsvcid": "45892" 00:22:35.426 }, 00:22:35.426 "auth": { 00:22:35.426 "state": "completed", 00:22:35.426 "digest": "sha512", 00:22:35.426 "dhgroup": "ffdhe4096" 00:22:35.426 } 00:22:35.426 } 00:22:35.426 ]' 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:35.426 16:28:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:35.426 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:35.684 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:35.684 16:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:36.617 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:36.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.618 16:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.618 16:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:36.875 16:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:36.875 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:37.459 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:37.459 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:37.716 { 00:22:37.716 "cntlid": 123, 00:22:37.716 "qid": 0, 00:22:37.716 "state": "enabled", 00:22:37.716 "thread": "nvmf_tgt_poll_group_000", 00:22:37.716 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:37.716 "listen_address": { 00:22:37.716 "trtype": "TCP", 00:22:37.716 "adrfam": "IPv4", 00:22:37.716 "traddr": "10.0.0.2", 00:22:37.716 "trsvcid": "4420" 00:22:37.716 }, 00:22:37.716 "peer_address": { 00:22:37.716 "trtype": "TCP", 00:22:37.716 "adrfam": "IPv4", 00:22:37.716 "traddr": "10.0.0.1", 00:22:37.716 "trsvcid": "45930" 00:22:37.716 }, 00:22:37.716 "auth": { 00:22:37.716 "state": "completed", 00:22:37.716 "digest": "sha512", 00:22:37.716 "dhgroup": "ffdhe4096" 00:22:37.716 } 00:22:37.716 } 00:22:37.716 ]' 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:37.716 16:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
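The jq checks above pull `.auth.digest`, `.auth.dhgroup` and `.auth.state` out of the `nvmf_subsystem_get_qpairs` output one field at a time. A minimal Python sketch of the same validation, using sample values copied from the qpair listing in this log (cntlid 123, sha512/ffdhe4096), might look like:

```python
import json

# Trimmed-down sample of what nvmf_subsystem_get_qpairs prints in this log.
qpairs_json = '''
[
  {
    "cntlid": 123,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha512",
      "dhgroup": "ffdhe4096"
    }
  }
]
'''

def check_auth(qpairs_text, digest, dhgroup):
    """Mirror the jq checks: .[0].auth.digest / .dhgroup / .state."""
    qpairs = json.loads(qpairs_text)
    auth = qpairs[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(qpairs_json, "sha512", "ffdhe4096"))  # True
```

This is only an illustration of the checks the test script performs with jq; `check_auth` is a hypothetical helper, not part of the SPDK test suite.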
00:22:37.974 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:37.974 16:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:38.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:38.906 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.163 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.164 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.164 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.164 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.164 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.164 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:39.421 00:22:39.421 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:39.421 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:39.421 16:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:39.986 { 00:22:39.986 "cntlid": 125, 00:22:39.986 "qid": 0, 00:22:39.986 "state": "enabled", 00:22:39.986 "thread": "nvmf_tgt_poll_group_000", 00:22:39.986 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:39.986 "listen_address": { 00:22:39.986 "trtype": "TCP", 00:22:39.986 "adrfam": "IPv4", 00:22:39.986 "traddr": "10.0.0.2", 00:22:39.986 "trsvcid": "4420" 00:22:39.986 }, 00:22:39.986 "peer_address": { 00:22:39.986 "trtype": "TCP", 00:22:39.986 "adrfam": "IPv4", 
00:22:39.986 "traddr": "10.0.0.1", 00:22:39.986 "trsvcid": "46348" 00:22:39.986 }, 00:22:39.986 "auth": { 00:22:39.986 "state": "completed", 00:22:39.986 "digest": "sha512", 00:22:39.986 "dhgroup": "ffdhe4096" 00:22:39.986 } 00:22:39.986 } 00:22:39.986 ]' 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:39.986 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:40.244 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:40.244 16:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:41.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.175 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:22:41.432 16:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.432 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:41.689 00:22:41.689 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:41.689 16:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:41.689 16:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:41.947 { 00:22:41.947 "cntlid": 127, 00:22:41.947 "qid": 0, 00:22:41.947 "state": "enabled", 00:22:41.947 "thread": "nvmf_tgt_poll_group_000", 00:22:41.947 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:41.947 "listen_address": { 00:22:41.947 "trtype": "TCP", 00:22:41.947 "adrfam": "IPv4", 00:22:41.947 "traddr": "10.0.0.2", 00:22:41.947 "trsvcid": "4420" 00:22:41.947 }, 00:22:41.947 "peer_address": { 00:22:41.947 "trtype": "TCP", 00:22:41.947 "adrfam": "IPv4", 00:22:41.947 "traddr": "10.0.0.1", 00:22:41.947 "trsvcid": "46386" 00:22:41.947 }, 00:22:41.947 "auth": { 00:22:41.947 "state": "completed", 00:22:41.947 "digest": "sha512", 00:22:41.947 "dhgroup": "ffdhe4096" 00:22:41.947 } 00:22:41.947 } 00:22:41.947 ]' 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:41.947 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:42.204 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:42.461 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:42.461 16:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:43.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.394 16:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.394 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:43.651 
16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:43.651 16:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:44.216 00:22:44.216 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:44.216 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:44.216 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:44.473 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:44.473 { 00:22:44.473 "cntlid": 129, 00:22:44.473 "qid": 0, 00:22:44.473 "state": "enabled", 00:22:44.473 "thread": "nvmf_tgt_poll_group_000", 00:22:44.474 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:44.474 "listen_address": { 00:22:44.474 "trtype": "TCP", 00:22:44.474 "adrfam": "IPv4", 00:22:44.474 "traddr": "10.0.0.2", 00:22:44.474 "trsvcid": "4420" 00:22:44.474 }, 00:22:44.474 "peer_address": { 00:22:44.474 "trtype": "TCP", 00:22:44.474 "adrfam": "IPv4", 00:22:44.474 "traddr": "10.0.0.1", 00:22:44.474 "trsvcid": "46414" 00:22:44.474 }, 00:22:44.474 "auth": { 00:22:44.474 "state": "completed", 00:22:44.474 "digest": "sha512", 00:22:44.474 "dhgroup": "ffdhe6144" 00:22:44.474 } 00:22:44.474 } 00:22:44.474 ]' 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:44.474 16:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
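After each attach, the test pulls the subsystem's qpairs as JSON and string-compares the negotiated digest, dhgroup, and auth state with jq. A self-contained sketch of those checks, run against a trimmed copy of the qpairs JSON printed above (requires jq; the sample drops the address fields for brevity):

```shell
# Same verification auth.sh performs at @75-@77, against a static sample.
qpairs='[
  {
    "cntlid": 129,
    "qid": 0,
    "state": "enabled",
    "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe6144" }
  }
]'
digest=$(echo "$qpairs" | jq -r '.[0].auth.digest')
dhgroup=$(echo "$qpairs" | jq -r '.[0].auth.dhgroup')
astate=$(echo "$qpairs" | jq -r '.[0].auth.state')

# The real script uses [[ x == \p\a\t\t\e\r\n ]] escaping to force a
# literal (non-glob) comparison; plain string tests do the same here.
[ "$digest" = "sha512" ] && [ "$dhgroup" = "ffdhe6144" ] \
  && [ "$astate" = "completed" ] && echo "auth parameters verified"
```

An `auth.state` of `completed` is what distinguishes a successfully authenticated qpair from one that merely connected.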
00:22:44.731 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:44.731 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:45.662 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:45.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:45.662 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:45.662 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.663 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.663 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.663 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:45.663 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.663 16:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.919 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:45.919 16:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:46.484 00:22:46.484 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:46.484 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:46.484 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:46.741 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:46.741 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:46.741 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:46.741 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:46.741 16:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:46.741 { 00:22:46.741 "cntlid": 131, 00:22:46.741 "qid": 0, 00:22:46.741 "state": "enabled", 00:22:46.741 "thread": "nvmf_tgt_poll_group_000", 00:22:46.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:46.741 "listen_address": { 00:22:46.741 "trtype": "TCP", 00:22:46.741 "adrfam": "IPv4", 00:22:46.741 "traddr": "10.0.0.2", 00:22:46.741 "trsvcid": "4420" 00:22:46.741 }, 00:22:46.741 "peer_address": { 
00:22:46.741 "trtype": "TCP", 00:22:46.741 "adrfam": "IPv4", 00:22:46.741 "traddr": "10.0.0.1", 00:22:46.741 "trsvcid": "46448" 00:22:46.741 }, 00:22:46.741 "auth": { 00:22:46.741 "state": "completed", 00:22:46.741 "digest": "sha512", 00:22:46.741 "dhgroup": "ffdhe6144" 00:22:46.741 } 00:22:46.741 } 00:22:46.741 ]' 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:46.741 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:46.999 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:46.999 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:46.999 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:47.255 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:47.255 16:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:48.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.186 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:48.187 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:48.187 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:48.444 16:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:48.444 16:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:49.009 00:22:49.009 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:49.009 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:49.009 16:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:49.266 { 00:22:49.266 "cntlid": 133, 00:22:49.266 "qid": 0, 00:22:49.266 "state": "enabled", 00:22:49.266 "thread": "nvmf_tgt_poll_group_000", 00:22:49.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:49.266 "listen_address": { 00:22:49.266 "trtype": "TCP", 00:22:49.266 "adrfam": "IPv4", 00:22:49.266 "traddr": "10.0.0.2", 00:22:49.266 "trsvcid": "4420" 00:22:49.266 }, 00:22:49.266 "peer_address": { 00:22:49.266 "trtype": "TCP", 00:22:49.266 "adrfam": "IPv4", 00:22:49.266 "traddr": "10.0.0.1", 00:22:49.266 "trsvcid": "46480" 00:22:49.266 }, 00:22:49.266 "auth": { 00:22:49.266 "state": "completed", 00:22:49.266 "digest": "sha512", 00:22:49.266 "dhgroup": "ffdhe6144" 00:22:49.266 } 00:22:49.266 } 00:22:49.266 ]' 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:49.266 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:49.523 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:49.523 16:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:50.456 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.456 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:22:50.713 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:50.714 16:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:51.280 00:22:51.280 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:51.280 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:51.280 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:51.538 { 00:22:51.538 "cntlid": 135, 00:22:51.538 "qid": 0, 00:22:51.538 "state": "enabled", 00:22:51.538 "thread": "nvmf_tgt_poll_group_000", 00:22:51.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:51.538 "listen_address": { 00:22:51.538 "trtype": "TCP", 00:22:51.538 "adrfam": "IPv4", 00:22:51.538 "traddr": "10.0.0.2", 00:22:51.538 "trsvcid": "4420" 00:22:51.538 }, 00:22:51.538 "peer_address": { 00:22:51.538 "trtype": "TCP", 00:22:51.538 "adrfam": "IPv4", 00:22:51.538 "traddr": "10.0.0.1", 00:22:51.538 "trsvcid": "57856" 00:22:51.538 }, 00:22:51.538 "auth": { 00:22:51.538 "state": "completed", 00:22:51.538 "digest": "sha512", 00:22:51.538 "dhgroup": "ffdhe6144" 00:22:51.538 } 00:22:51.538 } 00:22:51.538 ]' 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:51.538 16:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
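The secrets passed to `nvme connect` use the DH-HMAC-CHAP secret representation: a `DHHC-1:` version tag, a two-digit transform identifier (`00` here, i.e. no hash transform; `01`/`02`/`03` indicate SHA-256/384/512), and a base64 payload which, to my understanding of the format, is the key bytes followed by a 4-byte CRC-32 of the key. A sketch that inspects one of the secrets from this log (assumes a Linux `base64 -d`):

```shell
# Decode a DHHC-1 secret copied from the log and report its layout.
# Assumed structure: DHHC-1:<transform id>:<base64(key || 4-byte CRC-32)>:
secret='DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==:'
b64=$(echo "$secret" | cut -d: -f3)       # third ':'-separated field
total=$(echo "$b64" | base64 -d | wc -c)  # decoded payload size
keylen=$((total - 4))                     # trailing 4 bytes are the CRC-32
echo "payload: $total bytes, key: $keylen bytes"
```

For this secret the payload decodes to 52 bytes, i.e. a 48-byte key, which fits the 32/48/64-byte key sizes DH-HMAC-CHAP allows.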
00:22:52.103 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:52.103 16:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:53.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.036 16:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:53.969 00:22:53.969 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:53.969 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:53.969 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:54.227 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:54.227 { 00:22:54.227 "cntlid": 137, 00:22:54.227 "qid": 0, 00:22:54.227 "state": "enabled", 00:22:54.227 "thread": "nvmf_tgt_poll_group_000", 00:22:54.227 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:54.227 "listen_address": { 00:22:54.227 "trtype": "TCP", 00:22:54.227 "adrfam": "IPv4", 00:22:54.227 "traddr": "10.0.0.2", 00:22:54.227 "trsvcid": "4420" 00:22:54.227 }, 00:22:54.227 "peer_address": { 00:22:54.227 "trtype": "TCP", 00:22:54.227 "adrfam": "IPv4", 
00:22:54.227 "traddr": "10.0.0.1", 00:22:54.227 "trsvcid": "57892" 00:22:54.227 }, 00:22:54.227 "auth": { 00:22:54.227 "state": "completed", 00:22:54.227 "digest": "sha512", 00:22:54.227 "dhgroup": "ffdhe8192" 00:22:54.227 } 00:22:54.227 } 00:22:54.227 ]' 00:22:54.228 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:54.228 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:54.228 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:54.228 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:54.228 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:54.485 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:54.485 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:54.485 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:54.743 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:54.743 16:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:55.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:55.676 16:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:55.676 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:55.676 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.676 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:55.676 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:56.609 00:22:56.609 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:56.609 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:56.609 16:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:56.867 { 00:22:56.867 "cntlid": 139, 00:22:56.867 "qid": 0, 00:22:56.867 "state": "enabled", 00:22:56.867 "thread": "nvmf_tgt_poll_group_000", 00:22:56.867 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:56.867 "listen_address": { 00:22:56.867 "trtype": "TCP", 00:22:56.867 "adrfam": "IPv4", 00:22:56.867 "traddr": "10.0.0.2", 00:22:56.867 "trsvcid": "4420" 00:22:56.867 }, 00:22:56.867 "peer_address": { 00:22:56.867 "trtype": "TCP", 00:22:56.867 "adrfam": "IPv4", 00:22:56.867 "traddr": "10.0.0.1", 00:22:56.867 "trsvcid": "57914" 00:22:56.867 }, 00:22:56.867 "auth": { 00:22:56.867 "state": "completed", 00:22:56.867 "digest": "sha512", 00:22:56.867 "dhgroup": "ffdhe8192" 00:22:56.867 } 00:22:56.867 } 00:22:56.867 ]' 00:22:56.867 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:56.868 16:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:56.868 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:57.433 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:57.433 16:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: --dhchap-ctrl-secret DHHC-1:02:MWIyNmRhZjM1MTg4ZmJjNTk3NjJjYzcwOTc4MjY5ODdlZmM1MzlkM2I1YTk1NDE0f1Nbrg==: 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:58.367 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.367 16:28:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:58.367 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:58.625 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:58.625 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.625 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:58.625 16:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:59.556 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:59.556 { 00:22:59.556 "cntlid": 141, 00:22:59.556 "qid": 0, 00:22:59.556 "state": "enabled", 00:22:59.556 "thread": "nvmf_tgt_poll_group_000", 00:22:59.556 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:59.556 "listen_address": { 00:22:59.556 "trtype": "TCP", 00:22:59.556 "adrfam": "IPv4", 00:22:59.556 "traddr": "10.0.0.2", 00:22:59.556 "trsvcid": "4420" 00:22:59.556 }, 00:22:59.556 "peer_address": { 00:22:59.556 "trtype": "TCP", 00:22:59.556 "adrfam": "IPv4", 00:22:59.556 "traddr": "10.0.0.1", 00:22:59.556 "trsvcid": "57934" 00:22:59.556 }, 00:22:59.556 "auth": { 00:22:59.556 "state": "completed", 00:22:59.556 "digest": "sha512", 00:22:59.556 "dhgroup": "ffdhe8192" 00:22:59.556 } 00:22:59.556 } 00:22:59.556 ]' 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:59.556 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:59.814 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:59.815 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:59.815 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:59.815 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:22:59.815 16:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:00.073 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:23:00.073 16:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:01:MzFiM2Q0ZjI0NzBlODRmMWU4MDFjYWRiODdiYzIwY2YTSwdp: 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:01.005 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.005 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:01.263 16:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:02.197 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:02.197 { 00:23:02.197 "cntlid": 143, 00:23:02.197 "qid": 0, 00:23:02.197 "state": "enabled", 00:23:02.197 "thread": "nvmf_tgt_poll_group_000", 00:23:02.197 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:02.197 "listen_address": { 00:23:02.197 "trtype": "TCP", 00:23:02.197 "adrfam": "IPv4", 00:23:02.197 "traddr": "10.0.0.2", 00:23:02.197 "trsvcid": 
"4420" 00:23:02.197 }, 00:23:02.197 "peer_address": { 00:23:02.197 "trtype": "TCP", 00:23:02.197 "adrfam": "IPv4", 00:23:02.197 "traddr": "10.0.0.1", 00:23:02.197 "trsvcid": "45842" 00:23:02.197 }, 00:23:02.197 "auth": { 00:23:02.197 "state": "completed", 00:23:02.197 "digest": "sha512", 00:23:02.197 "dhgroup": "ffdhe8192" 00:23:02.197 } 00:23:02.197 } 00:23:02.197 ]' 00:23:02.197 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:02.455 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:02.713 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:23:02.713 16:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret 
DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:03.648 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:03.648 16:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 
-- # connect_authenticate sha512 ffdhe8192 0 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:03.906 16:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:04.841 00:23:04.841 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:04.841 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:04.841 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:05.100 { 00:23:05.100 "cntlid": 145, 00:23:05.100 "qid": 0, 00:23:05.100 "state": "enabled", 00:23:05.100 "thread": "nvmf_tgt_poll_group_000", 00:23:05.100 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:05.100 "listen_address": { 00:23:05.100 "trtype": "TCP", 00:23:05.100 "adrfam": "IPv4", 00:23:05.100 "traddr": "10.0.0.2", 00:23:05.100 "trsvcid": "4420" 00:23:05.100 }, 00:23:05.100 "peer_address": { 00:23:05.100 "trtype": "TCP", 00:23:05.100 "adrfam": "IPv4", 00:23:05.100 "traddr": "10.0.0.1", 00:23:05.100 "trsvcid": "45882" 00:23:05.100 }, 00:23:05.100 "auth": { 00:23:05.100 "state": "completed", 00:23:05.100 "digest": 
"sha512", 00:23:05.100 "dhgroup": "ffdhe8192" 00:23:05.100 } 00:23:05.100 } 00:23:05.100 ]' 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:05.100 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:05.665 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:23:05.665 16:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:00:YmMyNzMxNzE1ZGUwZDc2MjBhNzkzNmRkNDNhNzZiNWI2MTE2Njk5NTBiYzk2OTNiHuVhTQ==: --dhchap-ctrl-secret 
DHHC-1:03:YWFmZWY2ZTAwM2JiNzJkNWRjNDczYjFhNmU0N2EzYTlkMmQwZDdkNTQ4MDE5MGQ5YmY2ZjA0MTkwODYzMTg0OanSkZI=: 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:06.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:06.597 16:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:23:07.162 request: 00:23:07.162 { 00:23:07.162 "name": "nvme0", 00:23:07.162 "trtype": "tcp", 00:23:07.162 "traddr": "10.0.0.2", 00:23:07.162 "adrfam": "ipv4", 00:23:07.162 "trsvcid": "4420", 00:23:07.162 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:07.162 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:07.162 "prchk_reftag": false, 00:23:07.162 "prchk_guard": false, 00:23:07.162 "hdgst": false, 00:23:07.162 "ddgst": false, 00:23:07.162 "dhchap_key": "key2", 00:23:07.162 "allow_unrecognized_csi": false, 00:23:07.162 "method": "bdev_nvme_attach_controller", 00:23:07.162 "req_id": 1 00:23:07.162 } 00:23:07.162 Got JSON-RPC error response 00:23:07.162 response: 00:23:07.162 { 00:23:07.162 "code": -5, 00:23:07.162 "message": 
"Input/output error" 00:23:07.162 } 00:23:07.162 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:07.162 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:07.162 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:07.419 16:28:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:07.419 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.420 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:07.420 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:07.420 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.420 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:07.420 16:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:08.352 request: 00:23:08.352 { 00:23:08.352 "name": "nvme0", 00:23:08.352 "trtype": "tcp", 00:23:08.352 "traddr": "10.0.0.2", 00:23:08.352 "adrfam": "ipv4", 00:23:08.352 "trsvcid": "4420", 00:23:08.352 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:08.352 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:08.352 "prchk_reftag": false, 00:23:08.352 "prchk_guard": false, 00:23:08.352 "hdgst": 
false, 00:23:08.352 "ddgst": false, 00:23:08.352 "dhchap_key": "key1", 00:23:08.352 "dhchap_ctrlr_key": "ckey2", 00:23:08.352 "allow_unrecognized_csi": false, 00:23:08.352 "method": "bdev_nvme_attach_controller", 00:23:08.352 "req_id": 1 00:23:08.352 } 00:23:08.352 Got JSON-RPC error response 00:23:08.352 response: 00:23:08.352 { 00:23:08.352 "code": -5, 00:23:08.352 "message": "Input/output error" 00:23:08.352 } 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.352 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.353 16:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:08.918 request: 00:23:08.918 { 00:23:08.918 "name": "nvme0", 00:23:08.918 "trtype": 
"tcp", 00:23:08.918 "traddr": "10.0.0.2", 00:23:08.918 "adrfam": "ipv4", 00:23:08.918 "trsvcid": "4420", 00:23:08.918 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:08.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:08.918 "prchk_reftag": false, 00:23:08.918 "prchk_guard": false, 00:23:08.918 "hdgst": false, 00:23:08.918 "ddgst": false, 00:23:08.918 "dhchap_key": "key1", 00:23:08.918 "dhchap_ctrlr_key": "ckey1", 00:23:08.918 "allow_unrecognized_csi": false, 00:23:08.918 "method": "bdev_nvme_attach_controller", 00:23:08.918 "req_id": 1 00:23:08.918 } 00:23:08.918 Got JSON-RPC error response 00:23:08.918 response: 00:23:08.918 { 00:23:08.918 "code": -5, 00:23:08.918 "message": "Input/output error" 00:23:08.918 } 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 240836 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 240836 ']' 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 240836 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240836 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 240836' 00:23:08.918 killing process with pid 240836 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 240836 00:23:08.918 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 240836 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=263788 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 263788 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 263788 ']' 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.176 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 263788 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 263788 ']' 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:09.434 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.692 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:09.692 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:23:09.692 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:23:09.693 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.693 16:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 null0 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.kov 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.3a3 ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.3a3 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.YXu 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.A7K ]] 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.A7K 00:23:09.951 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.naD 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.z48 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.z48 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kTA 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:09.952 16:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:11.413 nvme0n1 00:23:11.413 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:23:11.413 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:23:11.413 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:23:11.692 { 00:23:11.692 "cntlid": 1, 00:23:11.692 "qid": 0, 00:23:11.692 "state": "enabled", 00:23:11.692 "thread": "nvmf_tgt_poll_group_000", 00:23:11.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:11.692 "listen_address": { 00:23:11.692 "trtype": "TCP", 00:23:11.692 "adrfam": "IPv4", 00:23:11.692 "traddr": "10.0.0.2", 00:23:11.692 "trsvcid": "4420" 00:23:11.692 }, 00:23:11.692 "peer_address": { 00:23:11.692 "trtype": "TCP", 00:23:11.692 "adrfam": "IPv4", 00:23:11.692 "traddr": 
"10.0.0.1", 00:23:11.692 "trsvcid": "59012" 00:23:11.692 }, 00:23:11.692 "auth": { 00:23:11.692 "state": "completed", 00:23:11.692 "digest": "sha512", 00:23:11.692 "dhgroup": "ffdhe8192" 00:23:11.692 } 00:23:11.692 } 00:23:11.692 ]' 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:23:11.692 16:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:23:11.970 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:23:11.970 16:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=: 00:23:12.955 16:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:23:12.955 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:23:12.955 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:12.955 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.955 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.955 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:23:12.956 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:13.214 16:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.214 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:23:13.472 request: 00:23:13.472 { 00:23:13.472 "name": "nvme0", 00:23:13.472 "trtype": "tcp", 00:23:13.472 "traddr": "10.0.0.2", 00:23:13.472 "adrfam": "ipv4", 00:23:13.472 "trsvcid": "4420", 00:23:13.472 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:23:13.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:23:13.472 "prchk_reftag": false, 00:23:13.472 "prchk_guard": false, 00:23:13.472 "hdgst": false, 00:23:13.472 "ddgst": false, 00:23:13.472 "dhchap_key": "key3", 00:23:13.472 
"allow_unrecognized_csi": false, 00:23:13.472 "method": "bdev_nvme_attach_controller", 00:23:13.472 "req_id": 1 00:23:13.472 } 00:23:13.472 Got JSON-RPC error response 00:23:13.472 response: 00:23:13.472 { 00:23:13.472 "code": -5, 00:23:13.472 "message": "Input/output error" 00:23:13.472 } 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:13.472 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:23:13.730 16:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:13.730 16:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:23:13.989 request:
00:23:13.989 {
00:23:13.989 "name": "nvme0",
00:23:13.989 "trtype": "tcp",
00:23:13.989 "traddr": "10.0.0.2",
00:23:13.989 "adrfam": "ipv4",
00:23:13.989 "trsvcid": "4420",
00:23:13.989 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:13.989 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:13.989 "prchk_reftag": false,
00:23:13.989 "prchk_guard": false,
00:23:13.989 "hdgst": false,
00:23:13.989 "ddgst": false,
00:23:13.989 "dhchap_key": "key3",
00:23:13.989 "allow_unrecognized_csi": false,
00:23:13.989 "method": "bdev_nvme_attach_controller",
00:23:13.989 "req_id": 1
00:23:13.989 }
00:23:13.989 Got JSON-RPC error response
00:23:13.989 response:
00:23:13.989 {
00:23:13.989 "code": -5,
00:23:13.989 "message": "Input/output error"
00:23:13.989 }
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:13.989 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:14.247 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:14.247 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.247 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:14.247 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:14.248 16:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:14.814 request:
00:23:14.814 {
00:23:14.814 "name": "nvme0",
00:23:14.814 "trtype": "tcp",
00:23:14.814 "traddr": "10.0.0.2",
00:23:14.814 "adrfam": "ipv4",
00:23:14.814 "trsvcid": "4420",
00:23:14.814 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:14.814 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:14.814 "prchk_reftag": false,
00:23:14.814 "prchk_guard": false,
00:23:14.814 "hdgst": false,
00:23:14.814 "ddgst": false,
00:23:14.814 "dhchap_key": "key0",
00:23:14.814 "dhchap_ctrlr_key": "key1",
00:23:14.814 "allow_unrecognized_csi": false,
00:23:14.814 "method": "bdev_nvme_attach_controller",
00:23:14.814 "req_id": 1
00:23:14.814 }
00:23:14.814 Got JSON-RPC error response
00:23:14.814 response:
00:23:14.814 {
00:23:14.814 "code": -5,
00:23:14.814 "message": "Input/output error"
00:23:14.814 }
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:14.814 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:23:15.380 nvme0n1
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:15.380 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:15.949 16:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:17.327 nvme0n1
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:23:17.327 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:17.584 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:17.585 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=:
00:23:17.585 16:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 -l 0 --dhchap-secret DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: --dhchap-ctrl-secret DHHC-1:03:OTYwY2VhYjFiZTU5NTk2Njc1ZWQ2ZTA4YjgxZjI4YmUwOWZiMDFlMmZkOGZmYTBmMDlmNTU0NzUyOGZmOTlmMbNvhN0=:
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:23:18.521 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:23:18.522 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:23:18.522 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:18.522 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:23:18.780 16:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:18.780 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:23:18.780 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:18.780 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:23:19.733 request:
00:23:19.733 {
00:23:19.733 "name": "nvme0",
00:23:19.733 "trtype": "tcp",
00:23:19.733 "traddr": "10.0.0.2",
00:23:19.733 "adrfam": "ipv4",
00:23:19.733 "trsvcid": "4420",
00:23:19.733 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:23:19.733 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55",
00:23:19.733 "prchk_reftag": false,
00:23:19.733 "prchk_guard": false,
00:23:19.733 "hdgst": false,
00:23:19.733 "ddgst": false,
00:23:19.733 "dhchap_key": "key1",
00:23:19.733 "allow_unrecognized_csi": false,
00:23:19.733 "method": "bdev_nvme_attach_controller",
00:23:19.733 "req_id": 1
00:23:19.733 }
00:23:19.733 Got JSON-RPC error response
00:23:19.733 response:
00:23:19.733 {
00:23:19.733 "code": -5,
00:23:19.733 "message": "Input/output error"
00:23:19.733 }
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:19.733 16:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:21.112 nvme0n1
00:23:21.112 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:23:21.112 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:23:21.112 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:21.370 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:21.370 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:21.370 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:21.628 16:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:23:21.886 nvme0n1
00:23:21.886 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:23:21.886 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:23:21.886 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:22.144 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:22.145 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:23:22.145 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: '' 2s
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl:
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:22.712 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl: ]]
00:23:22.713 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:ZDQ1NWJlNzFkYzMzZWQ3N2JkNmIyMWY4Y2YxMjU5MDRFL9Xl:
00:23:22.713 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:23:22.713 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:22.713 16:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key key2
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: 2s
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==:
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==: ]]
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTJiYWVlZjMwMDU1MDFmNWMxNWE4NDI1OWY3MDg2MzczYWFlYWE0ODlmYzBkZmQyckaOWw==:
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:23:24.621 16:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:23:26.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:26.532 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:26.791 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:26.791 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:26.791 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:26.791 16:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:28.165 nvme0n1
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:28.165 16:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:23:29.104 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:23:29.362 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:23:29.362 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:23:29.362 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:29.620 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:29.620 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:29.620 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:29.620 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:29.879 16:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:23:30.448 request:
00:23:30.448 {
00:23:30.448 "name": "nvme0",
00:23:30.448 "dhchap_key": "key1",
00:23:30.448 "dhchap_ctrlr_key": "key3",
00:23:30.448 "method": "bdev_nvme_set_keys",
00:23:30.448 "req_id": 1
00:23:30.448 }
00:23:30.448 Got JSON-RPC error response
00:23:30.448 response:
00:23:30.448 {
00:23:30.448 "code": -13,
00:23:30.448 "message": "Permission denied"
00:23:30.448 }
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:23:30.448 16:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:31.015 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 ))
00:23:31.015 16:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s
00:23:31.953 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:23:31.953 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:23:31.953 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 ))
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key key1
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:32.212 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:32.213 16:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:23:33.590 nvme0n1
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key key3
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:33.590 16:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0
00:23:34.527 request:
00:23:34.527 {
00:23:34.527 "name": "nvme0",
00:23:34.527 "dhchap_key": "key2",
00:23:34.527 "dhchap_ctrlr_key": "key0",
00:23:34.527 "method": "bdev_nvme_set_keys",
00:23:34.527 "req_id": 1
00:23:34.527 }
00:23:34.527 Got JSON-RPC error response
00:23:34.527 response:
00:23:34.527 {
00:23:34.527 "code": -13,
00:23:34.527 "message": "Permission denied"
00:23:34.527 }
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length
00:23:34.527 16:29:24
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:23:34.527 16:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:23:35.905 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:23:35.905 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:23:35.905 16:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 240856 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 240856 ']' 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 240856 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 240856 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:35.905 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 240856' 00:23:35.905 killing process with pid 240856 00:23:35.906 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 240856 00:23:35.906 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 240856 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:36.475 rmmod nvme_tcp 00:23:36.475 rmmod nvme_fabrics 00:23:36.475 rmmod nvme_keyring 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 263788 ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 263788 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 263788 ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 263788 00:23:36.475 16:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 263788 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 263788' 00:23:36.475 killing process with pid 263788 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 263788 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 263788 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:36.475 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:36.736 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:36.736 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:36.737 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.737 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.737 16:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.kov /tmp/spdk.key-sha256.YXu /tmp/spdk.key-sha384.naD /tmp/spdk.key-sha512.kTA /tmp/spdk.key-sha512.3a3 /tmp/spdk.key-sha384.A7K /tmp/spdk.key-sha256.z48 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:23:38.644 00:23:38.644 real 3m32.282s 00:23:38.644 user 8m16.218s 00:23:38.644 sys 0m27.778s 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:23:38.644 ************************************ 00:23:38.644 END TEST nvmf_auth_target 00:23:38.644 ************************************ 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.644 16:29:28 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:38.644 ************************************ 00:23:38.644 START TEST nvmf_bdevio_no_huge 00:23:38.644 ************************************ 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:23:38.644 * Looking for test storage... 00:23:38.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:23:38.644 16:29:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:23:38.903 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.903 --rc genhtml_branch_coverage=1 00:23:38.903 --rc genhtml_function_coverage=1 00:23:38.903 --rc genhtml_legend=1 00:23:38.903 --rc geninfo_all_blocks=1 00:23:38.903 --rc geninfo_unexecuted_blocks=1 00:23:38.903 00:23:38.903 ' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.903 --rc genhtml_branch_coverage=1 00:23:38.903 --rc genhtml_function_coverage=1 00:23:38.903 --rc genhtml_legend=1 00:23:38.903 --rc geninfo_all_blocks=1 00:23:38.903 --rc geninfo_unexecuted_blocks=1 00:23:38.903 00:23:38.903 ' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.903 --rc genhtml_branch_coverage=1 00:23:38.903 --rc genhtml_function_coverage=1 00:23:38.903 --rc genhtml_legend=1 00:23:38.903 --rc geninfo_all_blocks=1 00:23:38.903 --rc geninfo_unexecuted_blocks=1 00:23:38.903 00:23:38.903 ' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:38.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:38.903 --rc genhtml_branch_coverage=1 00:23:38.903 --rc 
genhtml_function_coverage=1 00:23:38.903 --rc genhtml_legend=1 00:23:38.903 --rc geninfo_all_blocks=1 00:23:38.903 --rc geninfo_unexecuted_blocks=1 00:23:38.903 00:23:38.903 ' 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:38.903 16:29:29 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:38.903 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:38.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:23:38.904 16:29:29 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 
0x159b)' 00:23:41.440 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:41.440 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:41.440 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:41.440 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:41.441 
16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:41.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:23:41.441 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:41.441 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:23:41.441 00:23:41.441 --- 10.0.0.2 ping statistics --- 00:23:41.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.441 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:41.441 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:41.441 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:23:41.441 00:23:41.441 --- 10.0.0.1 ping statistics --- 00:23:41.441 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:41.441 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=269047 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 269047 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 269047 ']' 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:41.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.441 [2024-11-19 16:29:31.450520] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:23:41.441 [2024-11-19 16:29:31.450591] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:23:41.441 [2024-11-19 16:29:31.524667] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:41.441 [2024-11-19 16:29:31.568682] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.441 [2024-11-19 16:29:31.568743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.441 [2024-11-19 16:29:31.568767] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.441 [2024-11-19 16:29:31.568777] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.441 [2024-11-19 16:29:31.568786] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.441 [2024-11-19 16:29:31.569768] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:41.441 [2024-11-19 16:29:31.569812] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:23:41.441 [2024-11-19 16:29:31.569900] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:41.441 [2024-11-19 16:29:31.569909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.441 [2024-11-19 16:29:31.712196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:41.441 16:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.441 Malloc0 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:41.441 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:41.442 [2024-11-19 16:29:31.749982] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.442 16:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:23:41.442 { 00:23:41.442 "params": { 00:23:41.442 "name": "Nvme$subsystem", 00:23:41.442 "trtype": "$TEST_TRANSPORT", 00:23:41.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:41.442 "adrfam": "ipv4", 00:23:41.442 "trsvcid": "$NVMF_PORT", 00:23:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:41.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:41.442 "hdgst": ${hdgst:-false}, 00:23:41.442 "ddgst": ${ddgst:-false} 00:23:41.442 }, 00:23:41.442 "method": "bdev_nvme_attach_controller" 00:23:41.442 } 00:23:41.442 EOF 00:23:41.442 )") 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:23:41.442 16:29:31 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:23:41.442 "params": { 00:23:41.442 "name": "Nvme1", 00:23:41.442 "trtype": "tcp", 00:23:41.442 "traddr": "10.0.0.2", 00:23:41.442 "adrfam": "ipv4", 00:23:41.442 "trsvcid": "4420", 00:23:41.442 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.442 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.442 "hdgst": false, 00:23:41.442 "ddgst": false 00:23:41.442 }, 00:23:41.442 "method": "bdev_nvme_attach_controller" 00:23:41.442 }' 00:23:41.700 [2024-11-19 16:29:31.796433] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:23:41.700 [2024-11-19 16:29:31.796523] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid269077 ] 00:23:41.700 [2024-11-19 16:29:31.866159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:41.700 [2024-11-19 16:29:31.916713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.700 [2024-11-19 16:29:31.916763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:41.700 [2024-11-19 16:29:31.916766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.959 I/O targets: 00:23:41.959 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:23:41.959 00:23:41.959 00:23:41.959 CUnit - A unit testing framework for C - Version 2.1-3 00:23:41.959 http://cunit.sourceforge.net/ 00:23:41.959 00:23:41.959 00:23:41.959 Suite: bdevio tests on: Nvme1n1 00:23:41.959 Test: blockdev write read block ...passed 00:23:41.959 Test: blockdev write zeroes read block ...passed 00:23:41.959 Test: blockdev write zeroes read no split ...passed 00:23:41.959 Test: blockdev write zeroes 
read split ...passed 00:23:41.959 Test: blockdev write zeroes read split partial ...passed 00:23:41.959 Test: blockdev reset ...[2024-11-19 16:29:32.264351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:41.959 [2024-11-19 16:29:32.264469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x223c6a0 (9): Bad file descriptor 00:23:42.218 [2024-11-19 16:29:32.324474] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:23:42.218 passed 00:23:42.218 Test: blockdev write read 8 blocks ...passed 00:23:42.218 Test: blockdev write read size > 128k ...passed 00:23:42.218 Test: blockdev write read invalid size ...passed 00:23:42.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:23:42.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:23:42.218 Test: blockdev write read max offset ...passed 00:23:42.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:23:42.218 Test: blockdev writev readv 8 blocks ...passed 00:23:42.218 Test: blockdev writev readv 30 x 1block ...passed 00:23:42.218 Test: blockdev writev readv block ...passed 00:23:42.218 Test: blockdev writev readv size > 128k ...passed 00:23:42.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:23:42.218 Test: blockdev comparev and writev ...[2024-11-19 16:29:32.539599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.539637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.539662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 
16:29:32.539680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:42.218 [2024-11-19 16:29:32.540947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:23:42.218 [2024-11-19 16:29:32.540963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:42.478 passed 00:23:42.478 Test: blockdev nvme passthru rw ...passed 00:23:42.478 Test: blockdev nvme passthru vendor specific ...[2024-11-19 16:29:32.624343] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.478 [2024-11-19 16:29:32.624372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:42.478 [2024-11-19 16:29:32.624526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.478 [2024-11-19 16:29:32.624549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:42.478 [2024-11-19 16:29:32.624680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.478 [2024-11-19 16:29:32.624704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:42.478 [2024-11-19 16:29:32.624835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:42.478 [2024-11-19 16:29:32.624857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:42.478 passed 00:23:42.478 Test: blockdev nvme admin passthru ...passed 00:23:42.478 Test: blockdev copy ...passed 00:23:42.478 00:23:42.478 Run Summary: Type Total Ran Passed Failed Inactive 00:23:42.478 suites 1 1 n/a 0 0 00:23:42.478 tests 23 23 23 0 0 00:23:42.478 asserts 152 152 152 0 n/a 00:23:42.478 00:23:42.478 Elapsed time = 1.064 seconds 
00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.738 16:29:32 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.738 rmmod nvme_tcp 00:23:42.738 rmmod nvme_fabrics 00:23:42.738 rmmod nvme_keyring 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 269047 ']' 00:23:42.738 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 269047 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 269047 ']' 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 269047 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 269047 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 269047' 00:23:42.738 killing process with pid 269047 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 269047 00:23:42.738 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 269047 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:23:43.305 16:29:33 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.305 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.306 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.306 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.306 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.306 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.306 16:29:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.216 00:23:45.216 real 0m6.551s 00:23:45.216 user 0m10.105s 00:23:45.216 sys 0m2.624s 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:23:45.216 ************************************ 00:23:45.216 END TEST nvmf_bdevio_no_huge 00:23:45.216 ************************************ 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:45.216 
************************************ 00:23:45.216 START TEST nvmf_tls 00:23:45.216 ************************************ 00:23:45.216 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:23:45.476 * Looking for test storage... 00:23:45.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
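
The xtrace above steps through the `cmp_versions`/`lt` helpers from scripts/common.sh while checking the installed lcov version ("1.15 < 2"). A minimal re-sketch of that field-by-field comparison, reconstructed from the trace rather than copied from SPDK, so the behavior matches the steps shown but details are an approximation:

```shell
# Reconstructed from the xtrace above; an approximation of SPDK's
# scripts/common.sh cmp_versions, not a verbatim copy.
cmp_versions() {
  local IFS=.-:              # split version strings on '.', '-' or ':'
  local op=$2 v a b
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$3"
  # Compare field by field, padding the shorter version with zeros.
  for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
    a=${ver1[v]:-0} b=${ver2[v]:-0}
    if (( a > b )); then
      if [[ $op == '>' || $op == '>=' ]]; then return 0; else return 1; fi
    elif (( a < b )); then
      if [[ $op == '<' || $op == '<=' ]]; then return 0; else return 1; fi
    fi
  done
  # All fields equal: true only for operators that include equality.
  if [[ $op == *=* ]]; then return 0; else return 1; fi
}

lt() { cmp_versions "$1" '<' "$2"; }   # e.g. `lt 1.15 2` succeeds, as traced
```

Note this sketch assumes purely numeric fields; the real helper routes each field through a `decimal` validator first, as the `[[ 1 =~ ^[0-9]+$ ]]` steps in the trace show.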
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:45.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.476 --rc genhtml_branch_coverage=1 00:23:45.476 --rc genhtml_function_coverage=1 00:23:45.476 --rc genhtml_legend=1 00:23:45.476 --rc geninfo_all_blocks=1 00:23:45.476 --rc geninfo_unexecuted_blocks=1 00:23:45.476 00:23:45.476 ' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:45.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.476 --rc genhtml_branch_coverage=1 00:23:45.476 --rc genhtml_function_coverage=1 00:23:45.476 --rc genhtml_legend=1 00:23:45.476 --rc geninfo_all_blocks=1 00:23:45.476 --rc geninfo_unexecuted_blocks=1 00:23:45.476 00:23:45.476 ' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:45.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.476 --rc genhtml_branch_coverage=1 00:23:45.476 --rc genhtml_function_coverage=1 00:23:45.476 --rc genhtml_legend=1 00:23:45.476 --rc geninfo_all_blocks=1 00:23:45.476 --rc geninfo_unexecuted_blocks=1 00:23:45.476 00:23:45.476 ' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:45.476 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:45.476 --rc genhtml_branch_coverage=1 00:23:45.476 --rc genhtml_function_coverage=1 00:23:45.476 --rc genhtml_legend=1 00:23:45.476 --rc geninfo_all_blocks=1 00:23:45.476 --rc geninfo_unexecuted_blocks=1 00:23:45.476 00:23:45.476 ' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:45.476 
16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
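
The `nvme gen-hostnqn` call above produces the UUID-based `NVME_HOSTNQN` used for the rest of the run. A rough stand-in for what it prints (an assumption for illustration: real nvme-cli prefers the host's DMI/system UUID, while this sketch just draws a random one):

```shell
# Approximation of `nvme gen-hostnqn` as invoked above: print a
# UUID-based NQN. The random uuid4 here is an assumption; nvme-cli
# uses the host's system UUID when one is available.
gen_hostnqn() {
  echo "nqn.2014-08.org.nvmexpress:uuid:$(python3 -c 'import uuid; print(uuid.uuid4())')"
}
```

The resulting string has the same shape as the `nqn.2014-08.org.nvmexpress:uuid:5b23e107-...` value captured in the trace.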
00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.476 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:45.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:23:45.477 16:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.013 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.014 16:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:48.014 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:48.014 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.014 16:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:48.014 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:48.014 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:48.014 16:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:48.014 
16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:48.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.325 ms 00:23:48.014 00:23:48.014 --- 10.0.0.2 ping statistics --- 00:23:48.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.014 rtt min/avg/max/mdev = 0.325/0.325/0.325/0.000 ms 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:48.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.087 ms 00:23:48.014 00:23:48.014 --- 10.0.0.1 ping statistics --- 00:23:48.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.014 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.014 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=271271 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 --wait-for-rpc 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 271271 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 271271 ']' 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.015 16:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.015 [2024-11-19 16:29:38.002137] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:23:48.015 [2024-11-19 16:29:38.002230] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.015 [2024-11-19 16:29:38.074485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.015 [2024-11-19 16:29:38.116369] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.015 [2024-11-19 16:29:38.116430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:48.015 [2024-11-19 16:29:38.116449] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.015 [2024-11-19 16:29:38.116460] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.015 [2024-11-19 16:29:38.116469] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:48.015 [2024-11-19 16:29:38.117081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:23:48.015 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:23:48.273 true 00:23:48.273 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:48.273 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:23:48.532 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:23:48.532 16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:23:48.532 
16:29:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:48.791 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:48.791 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:23:49.056 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:23:49.056 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:23:49.056 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:23:49.315 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:49.315 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:23:49.573 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:23:49.573 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:23:49.573 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:49.573 16:29:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:23:49.832 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:23:49.832 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:23:49.832 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:23:50.401 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:50.401 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:23:50.401 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:23:50.401 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:23:50.401 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:23:50.660 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:23:50.660 16:29:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:23:50.919 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:23:50.919 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:23:50.919 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:23:50.919 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:23:50.920 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:50.920 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:50.920 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:23:50.920 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:50.920 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:51.180 16:29:41 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.06KqLsDC7c 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.G1VI6XdJsa 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.06KqLsDC7c 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.G1VI6XdJsa 00:23:51.180 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:23:51.439 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:23:51.697 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.06KqLsDC7c 00:23:51.697 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.06KqLsDC7c 00:23:51.697 16:29:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:51.955 [2024-11-19 16:29:42.281814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:52.213 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:52.473 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:52.733 [2024-11-19 16:29:42.863421] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:52.733 [2024-11-19 16:29:42.863699] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:52.733 16:29:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:52.992 malloc0 00:23:52.992 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:53.253 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.06KqLsDC7c 00:23:53.512 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:53.772 16:29:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.06KqLsDC7c 00:24:03.763 Initializing NVMe Controllers 00:24:03.763 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.763 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:03.763 Initialization complete. Launching workers. 
00:24:03.763 ========================================================
00:24:03.763 Latency(us)
00:24:03.763 Device Information : IOPS MiB/s Average min max
00:24:03.763 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8626.90 33.70 7420.82 1092.43 8652.50
00:24:03.763 ========================================================
00:24:03.763 Total : 8626.90 33.70 7420.82 1092.43 8652.50
00:24:03.763
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.06KqLsDC7c
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.06KqLsDC7c
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=273168
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 273168 /var/tmp/bdevperf.sock
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 273168 ']'
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
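The `NVMeTLSkey-1:01:...:` strings generated earlier in this section come from `nvmf/common.sh`'s `format_key` (the traced `python -` heredoc): the configured key string is taken as ASCII bytes, a 4-byte CRC32 is appended, and the result is base64-encoded between a `NVMeTLSkey-1` prefix and the two-digit hash identifier. A sketch of that helper, under the assumption that the CRC is appended little-endian as in SPDK's implementation:

```shell
# Sketch of format_key <prefix> <key> <digest> (assumption: CRC32 of the key
# bytes is appended little-endian before base64-encoding, per SPDK's helper).
format_key() {
    local prefix=$1 key=$2 digest=$3
    python3 - "$prefix" "$key" "$digest" <<'EOF'
import base64, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2].encode(), int(sys.argv[3])
crc = zlib.crc32(key).to_bytes(4, "little")
print("{}:{:02x}:{}:".format(prefix, digest, base64.b64encode(key + crc).decode()))
EOF
}

format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
```

The base64 payload for a 32-character key is 48 characters (32 key bytes + 4 CRC bytes), giving the 65-character interchange strings written to `/tmp/tmp.06KqLsDC7c` and `/tmp/tmp.G1VI6XdJsa` above.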
00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:03.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:03.763 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:04.023 [2024-11-19 16:29:54.133492] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:04.023 [2024-11-19 16:29:54.133566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid273168 ] 00:24:04.023 [2024-11-19 16:29:54.199447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:04.023 [2024-11-19 16:29:54.244259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.284 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:04.284 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:04.284 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.06KqLsDC7c 00:24:04.543 16:29:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:24:04.801 [2024-11-19 16:29:54.937811] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:04.801 TLSTESTn1 00:24:04.801 16:29:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:05.061 Running I/O for 10 seconds... 00:24:06.938 3395.00 IOPS, 13.26 MiB/s [2024-11-19T15:29:58.217Z] 3461.50 IOPS, 13.52 MiB/s [2024-11-19T15:29:59.601Z] 3450.00 IOPS, 13.48 MiB/s [2024-11-19T15:30:00.540Z] 3405.50 IOPS, 13.30 MiB/s [2024-11-19T15:30:01.477Z] 3402.80 IOPS, 13.29 MiB/s [2024-11-19T15:30:02.413Z] 3395.00 IOPS, 13.26 MiB/s [2024-11-19T15:30:03.351Z] 3405.43 IOPS, 13.30 MiB/s [2024-11-19T15:30:04.286Z] 3410.88 IOPS, 13.32 MiB/s [2024-11-19T15:30:05.221Z] 3413.44 IOPS, 13.33 MiB/s [2024-11-19T15:30:05.481Z] 3398.60 IOPS, 13.28 MiB/s 00:24:15.142 Latency(us) 00:24:15.142 [2024-11-19T15:30:05.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.142 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:15.142 Verification LBA range: start 0x0 length 0x2000 00:24:15.142 TLSTESTn1 : 10.04 3397.35 13.27 0.00 0.00 37587.09 10922.67 40001.23 00:24:15.142 [2024-11-19T15:30:05.481Z] =================================================================================================================== 00:24:15.142 [2024-11-19T15:30:05.481Z] Total : 3397.35 13.27 0.00 0.00 37587.09 10922.67 40001.23 00:24:15.142 { 00:24:15.142 "results": [ 00:24:15.142 { 00:24:15.142 "job": "TLSTESTn1", 00:24:15.142 "core_mask": "0x4", 00:24:15.142 "workload": "verify", 00:24:15.142 "status": "finished", 00:24:15.142 "verify_range": { 00:24:15.142 "start": 0, 00:24:15.142 "length": 8192 00:24:15.142 }, 00:24:15.142 "queue_depth": 128, 00:24:15.142 "io_size": 4096, 00:24:15.142 "runtime": 10.04105, 00:24:15.142 "iops": 
3397.353862394869, 00:24:15.142 "mibps": 13.270913524979957, 00:24:15.142 "io_failed": 0, 00:24:15.142 "io_timeout": 0, 00:24:15.142 "avg_latency_us": 37587.08834598736, 00:24:15.142 "min_latency_us": 10922.666666666666, 00:24:15.142 "max_latency_us": 40001.23259259259 00:24:15.142 } 00:24:15.142 ], 00:24:15.142 "core_count": 1 00:24:15.142 } 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 273168 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 273168 ']' 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 273168 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 273168 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 273168' 00:24:15.142 killing process with pid 273168 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 273168 00:24:15.142 Received shutdown signal, test time was about 10.000000 seconds 00:24:15.142 00:24:15.142 Latency(us) 00:24:15.142 [2024-11-19T15:30:05.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:15.142 [2024-11-19T15:30:05.481Z] 
=================================================================================================================== 00:24:15.142 [2024-11-19T15:30:05.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 273168 00:24:15.142 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G1VI6XdJsa 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G1VI6XdJsa 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G1VI6XdJsa 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G1VI6XdJsa 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274599 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274599 /var/tmp/bdevperf.sock 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274599 ']' 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:15.401 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:15.401 [2024-11-19 16:30:05.528476] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
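The throughput columns in the two result tables above are consistent with the configured IO size: both the `spdk_nvme_perf` run and the bdevperf run use `-o 4096` (4 KiB IOs), so MiB/s = IOPS × 4096 / 2^20. Checking the reported totals:

```shell
# Reported totals above: 8626.90 IOPS -> 33.70 MiB/s (perf run),
# 3397.35 IOPS -> 13.27 MiB/s (bdevperf TLSTESTn1 run), both with 4096-byte IOs.
awk 'BEGIN {
    printf "perf run : %.2f MiB/s\n", 8626.90 * 4096 / (1024 * 1024)
    printf "bdevperf : %.2f MiB/s\n", 3397.35 * 4096 / (1024 * 1024)
}'
```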
00:24:15.401 [2024-11-19 16:30:05.528555] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274599 ] 00:24:15.401 [2024-11-19 16:30:05.595608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:15.401 [2024-11-19 16:30:05.642337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.659 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:15.659 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:15.659 16:30:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G1VI6XdJsa 00:24:15.917 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:16.176 [2024-11-19 16:30:06.268492] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:16.176 [2024-11-19 16:30:06.275409] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:24:16.176 [2024-11-19 16:30:06.275838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33370 (107): Transport endpoint is not connected 00:24:16.176 [2024-11-19 16:30:06.276828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd33370 (9): Bad file descriptor 00:24:16.176 [2024-11-19 
16:30:06.277828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:16.176 [2024-11-19 16:30:06.277851] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:16.176 [2024-11-19 16:30:06.277864] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:16.176 [2024-11-19 16:30:06.277882] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:16.176 request: 00:24:16.176 { 00:24:16.176 "name": "TLSTEST", 00:24:16.176 "trtype": "tcp", 00:24:16.176 "traddr": "10.0.0.2", 00:24:16.176 "adrfam": "ipv4", 00:24:16.176 "trsvcid": "4420", 00:24:16.176 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:16.176 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:16.176 "prchk_reftag": false, 00:24:16.176 "prchk_guard": false, 00:24:16.176 "hdgst": false, 00:24:16.176 "ddgst": false, 00:24:16.176 "psk": "key0", 00:24:16.176 "allow_unrecognized_csi": false, 00:24:16.176 "method": "bdev_nvme_attach_controller", 00:24:16.176 "req_id": 1 00:24:16.176 } 00:24:16.176 Got JSON-RPC error response 00:24:16.176 response: 00:24:16.176 { 00:24:16.176 "code": -5, 00:24:16.176 "message": "Input/output error" 00:24:16.176 } 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274599 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274599 ']' 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274599 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274599 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274599' 00:24:16.176 killing process with pid 274599 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274599 00:24:16.176 Received shutdown signal, test time was about 10.000000 seconds 00:24:16.176 00:24:16.176 Latency(us) 00:24:16.176 [2024-11-19T15:30:06.515Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.176 [2024-11-19T15:30:06.515Z] =================================================================================================================== 00:24:16.176 [2024-11-19T15:30:06.515Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:16.176 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274599 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.06KqLsDC7c 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
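The `NOT run_bdevperf ...` cases exercise autotest_common.sh's negative-test wrapper; the trace shows its tail (`es=1`, the `(( es > 128 ))` signal check, and the final `(( !es == 0 ))`). A simplified sketch of the idea (assumption: the real helper also validates the command via `valid_exec_arg` first):

```shell
# Simplified sketch of the NOT() negative-test wrapper: succeed only when the
# wrapped command fails. Exit codes above 128 indicate death by signal and are
# propagated as real failures rather than expected ones.
NOT() {
    local es=0
    "$@" || es=$?
    (( es > 128 )) && return $es
    (( !es == 0 ))   # arithmetic truth iff es was nonzero
}

NOT false && echo "negative test passed"
```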
00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.06KqLsDC7c 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.06KqLsDC7c 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.06KqLsDC7c 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=274739 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 274739 
/var/tmp/bdevperf.sock 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 274739 ']' 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:16.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:16.435 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:16.435 [2024-11-19 16:30:06.577996] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
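The failed `bdev_nvme_attach_controller` calls in this section return JSON-RPC `"code": -5` with `"message": "Input/output error"`: SPDK's RPC layer reports such failures as negated POSIX errno values, and 5 is `EIO`. A quick check of that mapping:

```shell
# JSON-RPC error code -5 in the responses in this section is a negated
# POSIX errno: -EIO, i.e. "Input/output error".
python3 - <<'EOF'
import errno, os
code = -5
assert -code == errno.EIO
print(os.strerror(-code))
EOF
```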
00:24:16.435 [2024-11-19 16:30:06.578104] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid274739 ] 00:24:16.435 [2024-11-19 16:30:06.644959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.435 [2024-11-19 16:30:06.688456] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:16.692 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:16.692 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:16.692 16:30:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.06KqLsDC7c 00:24:16.950 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:24:17.208 [2024-11-19 16:30:07.325147] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:17.208 [2024-11-19 16:30:07.330660] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:17.208 [2024-11-19 16:30:07.330693] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:24:17.208 [2024-11-19 16:30:07.330743] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:17.209 [2024-11-19 16:30:07.331319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a36370 (107): Transport endpoint is not connected 00:24:17.209 [2024-11-19 16:30:07.332308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a36370 (9): Bad file descriptor 00:24:17.209 [2024-11-19 16:30:07.333306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:24:17.209 [2024-11-19 16:30:07.333330] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:17.209 [2024-11-19 16:30:07.333345] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:24:17.209 [2024-11-19 16:30:07.333387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:24:17.209 request: 00:24:17.209 { 00:24:17.209 "name": "TLSTEST", 00:24:17.209 "trtype": "tcp", 00:24:17.209 "traddr": "10.0.0.2", 00:24:17.209 "adrfam": "ipv4", 00:24:17.209 "trsvcid": "4420", 00:24:17.209 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:17.209 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:17.209 "prchk_reftag": false, 00:24:17.209 "prchk_guard": false, 00:24:17.209 "hdgst": false, 00:24:17.209 "ddgst": false, 00:24:17.209 "psk": "key0", 00:24:17.209 "allow_unrecognized_csi": false, 00:24:17.209 "method": "bdev_nvme_attach_controller", 00:24:17.209 "req_id": 1 00:24:17.209 } 00:24:17.209 Got JSON-RPC error response 00:24:17.209 response: 00:24:17.209 { 00:24:17.209 "code": -5, 00:24:17.209 "message": "Input/output error" 00:24:17.209 } 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 274739 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 274739 ']' 00:24:17.209 16:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 274739 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 274739 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 274739' 00:24:17.209 killing process with pid 274739 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 274739 00:24:17.209 Received shutdown signal, test time was about 10.000000 seconds 00:24:17.209 00:24:17.209 Latency(us) 00:24:17.209 [2024-11-19T15:30:07.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.209 [2024-11-19T15:30:07.548Z] =================================================================================================================== 00:24:17.209 [2024-11-19T15:30:07.548Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:17.209 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 274739 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:17.467 16:30:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.06KqLsDC7c 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.06KqLsDC7c 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.06KqLsDC7c 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.06KqLsDC7c 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275034 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275034 /var/tmp/bdevperf.sock 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275034 ']' 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:17.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:17.467 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:17.467 [2024-11-19 16:30:07.613024] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:17.467 [2024-11-19 16:30:07.613132] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275034 ] 00:24:17.467 [2024-11-19 16:30:07.683783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.467 [2024-11-19 16:30:07.731683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:17.724 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:17.724 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:17.724 16:30:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.06KqLsDC7c 00:24:17.982 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:18.240 [2024-11-19 16:30:08.402655] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:18.240 [2024-11-19 16:30:08.410779] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:18.240 [2024-11-19 16:30:08.410810] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:24:18.240 [2024-11-19 16:30:08.410864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:24:18.240 [2024-11-19 16:30:08.411660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839370 (107): Transport endpoint is not connected 00:24:18.240 [2024-11-19 16:30:08.412652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x839370 (9): Bad file descriptor 00:24:18.240 [2024-11-19 16:30:08.413651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:24:18.240 [2024-11-19 16:30:08.413671] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:24:18.240 [2024-11-19 16:30:08.413683] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:24:18.240 [2024-11-19 16:30:08.413701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:24:18.240 request: 00:24:18.240 { 00:24:18.240 "name": "TLSTEST", 00:24:18.240 "trtype": "tcp", 00:24:18.240 "traddr": "10.0.0.2", 00:24:18.240 "adrfam": "ipv4", 00:24:18.240 "trsvcid": "4420", 00:24:18.240 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:18.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:18.240 "prchk_reftag": false, 00:24:18.240 "prchk_guard": false, 00:24:18.240 "hdgst": false, 00:24:18.240 "ddgst": false, 00:24:18.240 "psk": "key0", 00:24:18.240 "allow_unrecognized_csi": false, 00:24:18.240 "method": "bdev_nvme_attach_controller", 00:24:18.240 "req_id": 1 00:24:18.240 } 00:24:18.240 Got JSON-RPC error response 00:24:18.240 response: 00:24:18.240 { 00:24:18.240 "code": -5, 00:24:18.240 "message": "Input/output error" 00:24:18.240 } 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275034 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275034 ']' 00:24:18.240 16:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275034 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275034 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275034' 00:24:18.240 killing process with pid 275034 00:24:18.240 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275034 00:24:18.240 Received shutdown signal, test time was about 10.000000 seconds 00:24:18.240 00:24:18.240 Latency(us) 00:24:18.240 [2024-11-19T15:30:08.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:18.241 [2024-11-19T15:30:08.580Z] =================================================================================================================== 00:24:18.241 [2024-11-19T15:30:08.580Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:18.241 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275034 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:18.499 16:30:08 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:18.499 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275479 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275479 /var/tmp/bdevperf.sock 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275479 ']' 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:18.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:18.500 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:18.500 [2024-11-19 16:30:08.692854] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:18.500 [2024-11-19 16:30:08.692943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275479 ] 00:24:18.500 [2024-11-19 16:30:08.762377] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:18.500 [2024-11-19 16:30:08.811552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:18.758 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:18.758 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:18.758 16:30:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:24:19.017 [2024-11-19 16:30:09.194063] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:24:19.017 [2024-11-19 16:30:09.194116] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:19.017 request: 00:24:19.017 { 00:24:19.017 "name": "key0", 00:24:19.017 "path": "", 00:24:19.017 "method": "keyring_file_add_key", 00:24:19.017 "req_id": 1 00:24:19.017 } 00:24:19.017 Got JSON-RPC error response 00:24:19.017 response: 00:24:19.017 { 00:24:19.017 "code": -1, 00:24:19.017 "message": "Operation not permitted" 00:24:19.017 } 00:24:19.017 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:19.274 [2024-11-19 16:30:09.466936] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:24:19.274 [2024-11-19 16:30:09.467010] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:19.274 request: 00:24:19.274 { 00:24:19.274 "name": "TLSTEST", 00:24:19.274 "trtype": "tcp", 00:24:19.274 "traddr": "10.0.0.2", 00:24:19.274 "adrfam": "ipv4", 00:24:19.274 "trsvcid": "4420", 00:24:19.274 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:19.274 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:19.274 "prchk_reftag": false, 00:24:19.274 "prchk_guard": false, 00:24:19.274 "hdgst": false, 00:24:19.274 "ddgst": false, 00:24:19.274 "psk": "key0", 00:24:19.274 "allow_unrecognized_csi": false, 00:24:19.274 "method": "bdev_nvme_attach_controller", 00:24:19.274 "req_id": 1 00:24:19.274 } 00:24:19.274 Got JSON-RPC error response 00:24:19.274 response: 00:24:19.274 { 00:24:19.274 "code": -126, 00:24:19.274 "message": "Required key not available" 00:24:19.274 } 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 275479 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275479 ']' 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275479 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275479 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275479' 00:24:19.274 killing process with pid 275479 00:24:19.274 
16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275479 00:24:19.274 Received shutdown signal, test time was about 10.000000 seconds 00:24:19.274 00:24:19.274 Latency(us) 00:24:19.274 [2024-11-19T15:30:09.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:19.274 [2024-11-19T15:30:09.613Z] =================================================================================================================== 00:24:19.274 [2024-11-19T15:30:09.613Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:19.274 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275479 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 271271 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 271271 ']' 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 271271 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 271271 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 271271' 00:24:19.532 killing process with pid 271271 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 271271 00:24:19.532 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 271271 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:24:19.790 16:30:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Um4T2NlSTs 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:24:19.790 16:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Um4T2NlSTs 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=275676 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 275676 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275676 ']' 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:19.790 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:19.790 [2024-11-19 16:30:10.074391] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:19.790 [2024-11-19 16:30:10.074510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:20.049 [2024-11-19 16:30:10.150121] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.049 [2024-11-19 16:30:10.193689] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:20.049 [2024-11-19 16:30:10.193746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:20.049 [2024-11-19 16:30:10.193770] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:20.049 [2024-11-19 16:30:10.193796] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:20.049 [2024-11-19 16:30:10.193805] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:20.049 [2024-11-19 16:30:10.194426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Um4T2NlSTs 00:24:20.049 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:20.306 [2024-11-19 16:30:10.585202] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:20.306 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:20.564 16:30:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:20.823 [2024-11-19 16:30:11.126700] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:20.823 [2024-11-19 16:30:11.126965] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:20.823 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:21.081 malloc0 00:24:21.081 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:21.647 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:21.647 16:30:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Um4T2NlSTs 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Um4T2NlSTs 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=275965 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:21.905 16:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 275965 /var/tmp/bdevperf.sock 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 275965 ']' 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:21.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:21.905 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:22.163 [2024-11-19 16:30:12.266566] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:22.163 [2024-11-19 16:30:12.266656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid275965 ] 00:24:22.163 [2024-11-19 16:30:12.331777] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.163 [2024-11-19 16:30:12.376503] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:22.163 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.163 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:22.163 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:22.727 16:30:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:22.727 [2024-11-19 16:30:13.006862] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:22.984 TLSTESTn1 00:24:22.984 16:30:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:22.984 Running I/O for 10 seconds... 
00:24:25.290 3114.00 IOPS, 12.16 MiB/s [2024-11-19T15:30:16.563Z] 3239.00 IOPS, 12.65 MiB/s [2024-11-19T15:30:17.497Z] 3303.00 IOPS, 12.90 MiB/s [2024-11-19T15:30:18.430Z] 3346.75 IOPS, 13.07 MiB/s [2024-11-19T15:30:19.367Z] 3384.60 IOPS, 13.22 MiB/s [2024-11-19T15:30:20.304Z] 3404.00 IOPS, 13.30 MiB/s [2024-11-19T15:30:21.237Z] 3399.14 IOPS, 13.28 MiB/s [2024-11-19T15:30:22.613Z] 3405.00 IOPS, 13.30 MiB/s [2024-11-19T15:30:23.547Z] 3377.89 IOPS, 13.19 MiB/s [2024-11-19T15:30:23.547Z] 3387.30 IOPS, 13.23 MiB/s 00:24:33.208 Latency(us) 00:24:33.208 [2024-11-19T15:30:23.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.208 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:33.208 Verification LBA range: start 0x0 length 0x2000 00:24:33.208 TLSTESTn1 : 10.02 3392.83 13.25 0.00 0.00 37662.37 8252.68 57477.50 00:24:33.208 [2024-11-19T15:30:23.547Z] =================================================================================================================== 00:24:33.208 [2024-11-19T15:30:23.547Z] Total : 3392.83 13.25 0.00 0.00 37662.37 8252.68 57477.50 00:24:33.208 { 00:24:33.208 "results": [ 00:24:33.208 { 00:24:33.208 "job": "TLSTESTn1", 00:24:33.208 "core_mask": "0x4", 00:24:33.208 "workload": "verify", 00:24:33.208 "status": "finished", 00:24:33.208 "verify_range": { 00:24:33.208 "start": 0, 00:24:33.208 "length": 8192 00:24:33.208 }, 00:24:33.208 "queue_depth": 128, 00:24:33.208 "io_size": 4096, 00:24:33.208 "runtime": 10.021142, 00:24:33.208 "iops": 3392.8268853988898, 00:24:33.208 "mibps": 13.253230021089413, 00:24:33.208 "io_failed": 0, 00:24:33.208 "io_timeout": 0, 00:24:33.208 "avg_latency_us": 37662.37268775599, 00:24:33.208 "min_latency_us": 8252.68148148148, 00:24:33.208 "max_latency_us": 57477.49925925926 00:24:33.208 } 00:24:33.208 ], 00:24:33.208 "core_count": 1 00:24:33.208 } 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 275965 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275965 ']' 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275965 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275965 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275965' 00:24:33.208 killing process with pid 275965 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275965 00:24:33.208 Received shutdown signal, test time was about 10.000000 seconds 00:24:33.208 00:24:33.208 Latency(us) 00:24:33.208 [2024-11-19T15:30:23.547Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:33.208 [2024-11-19T15:30:23.547Z] =================================================================================================================== 00:24:33.208 [2024-11-19T15:30:23.547Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275965 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Um4T2NlSTs 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Um4T2NlSTs 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Um4T2NlSTs 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Um4T2NlSTs 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Um4T2NlSTs 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=277281 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:33.208 16:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 277281 /var/tmp/bdevperf.sock 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277281 ']' 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:33.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.208 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:33.208 [2024-11-19 16:30:23.540328] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:33.208 [2024-11-19 16:30:23.540426] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid277281 ] 00:24:33.467 [2024-11-19 16:30:23.608469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:33.467 [2024-11-19 16:30:23.653452] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.467 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.467 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:33.467 16:30:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:33.724 [2024-11-19 16:30:24.018990] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Um4T2NlSTs': 0100666 00:24:33.724 [2024-11-19 16:30:24.019023] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:33.724 request: 00:24:33.724 { 00:24:33.724 "name": "key0", 00:24:33.724 "path": "/tmp/tmp.Um4T2NlSTs", 00:24:33.724 "method": "keyring_file_add_key", 00:24:33.724 "req_id": 1 00:24:33.724 } 00:24:33.724 Got JSON-RPC error response 00:24:33.724 response: 00:24:33.724 { 00:24:33.724 "code": -1, 00:24:33.724 "message": "Operation not permitted" 00:24:33.724 } 00:24:33.724 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:33.982 [2024-11-19 16:30:24.283805] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.982 [2024-11-19 16:30:24.283856] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:24:33.982 request: 00:24:33.982 { 00:24:33.982 "name": "TLSTEST", 00:24:33.982 "trtype": "tcp", 00:24:33.982 "traddr": "10.0.0.2", 00:24:33.982 "adrfam": "ipv4", 00:24:33.982 "trsvcid": "4420", 00:24:33.982 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.982 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.982 "prchk_reftag": false, 00:24:33.982 "prchk_guard": false, 00:24:33.982 "hdgst": false, 00:24:33.982 "ddgst": false, 00:24:33.982 "psk": "key0", 00:24:33.982 "allow_unrecognized_csi": false, 00:24:33.982 "method": "bdev_nvme_attach_controller", 00:24:33.982 "req_id": 1 00:24:33.982 } 00:24:33.982 Got JSON-RPC error response 00:24:33.982 response: 00:24:33.982 { 00:24:33.982 "code": -126, 00:24:33.982 "message": "Required key not available" 00:24:33.982 } 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 277281 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277281 ']' 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277281 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.982 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277281 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 277281' 00:24:34.240 killing process with pid 277281 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277281 00:24:34.240 Received shutdown signal, test time was about 10.000000 seconds 00:24:34.240 00:24:34.240 Latency(us) 00:24:34.240 [2024-11-19T15:30:24.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:34.240 [2024-11-19T15:30:24.579Z] =================================================================================================================== 00:24:34.240 [2024-11-19T15:30:24.579Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277281 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 275676 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 275676 ']' 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 275676 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 275676 00:24:34.240 16:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 275676' 00:24:34.240 killing process with pid 275676 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 275676 00:24:34.240 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 275676 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277430 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277430 00:24:34.500 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277430 ']' 00:24:34.501 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:34.501 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:34.501 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:24:34.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:34.501 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:34.501 16:30:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.501 [2024-11-19 16:30:24.828594] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:34.501 [2024-11-19 16:30:24.828696] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:34.760 [2024-11-19 16:30:24.897739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.760 [2024-11-19 16:30:24.937829] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:34.760 [2024-11-19 16:30:24.937905] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:34.760 [2024-11-19 16:30:24.937928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:34.760 [2024-11-19 16:30:24.937938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:34.760 [2024-11-19 16:30:24.937948] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:34.760 [2024-11-19 16:30:24.938536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Um4T2NlSTs 00:24:34.760 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:35.018 [2024-11-19 16:30:25.325329] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.018 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:35.583 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:35.584 [2024-11-19 16:30:25.870834] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:35.584 [2024-11-19 16:30:25.871129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.584 16:30:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:35.842 malloc0 00:24:35.842 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:36.099 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:36.357 [2024-11-19 16:30:26.671090] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Um4T2NlSTs': 0100666 00:24:36.357 [2024-11-19 16:30:26.671140] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:24:36.357 request: 00:24:36.357 { 00:24:36.357 "name": "key0", 00:24:36.357 "path": "/tmp/tmp.Um4T2NlSTs", 00:24:36.357 "method": "keyring_file_add_key", 00:24:36.357 "req_id": 1 
00:24:36.357 } 00:24:36.357 Got JSON-RPC error response 00:24:36.357 response: 00:24:36.357 { 00:24:36.357 "code": -1, 00:24:36.357 "message": "Operation not permitted" 00:24:36.357 } 00:24:36.357 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:36.615 [2024-11-19 16:30:26.935831] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:24:36.615 [2024-11-19 16:30:26.935888] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:24:36.615 request: 00:24:36.615 { 00:24:36.615 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:36.615 "host": "nqn.2016-06.io.spdk:host1", 00:24:36.615 "psk": "key0", 00:24:36.615 "method": "nvmf_subsystem_add_host", 00:24:36.615 "req_id": 1 00:24:36.615 } 00:24:36.615 Got JSON-RPC error response 00:24:36.615 response: 00:24:36.615 { 00:24:36.615 "code": -32603, 00:24:36.615 "message": "Internal error" 00:24:36.615 } 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 277430 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277430 ']' 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277430 00:24:36.873 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:36.874 16:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:36.874 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277430 00:24:36.874 16:30:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277430' 00:24:36.874 killing process with pid 277430 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277430 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277430 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Um4T2NlSTs 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=277725 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 277725 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 277725 ']' 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:36.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:36.874 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.132 [2024-11-19 16:30:27.247584] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:37.132 [2024-11-19 16:30:27.247675] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:37.132 [2024-11-19 16:30:27.318085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.132 [2024-11-19 16:30:27.364098] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:37.132 [2024-11-19 16:30:27.364153] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:37.132 [2024-11-19 16:30:27.364176] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:37.132 [2024-11-19 16:30:27.364187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:37.132 [2024-11-19 16:30:27.364197] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:37.132 [2024-11-19 16:30:27.364733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Um4T2NlSTs 00:24:37.390 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:37.649 [2024-11-19 16:30:27.758890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.649 16:30:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:37.907 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:38.166 [2024-11-19 16:30:28.308433] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:38.166 [2024-11-19 16:30:28.308707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:24:38.166 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:38.424 malloc0 00:24:38.424 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:38.681 16:30:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:38.948 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=278016 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 278016 /var/tmp/bdevperf.sock 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278016 ']' 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:24:39.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:39.205 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:39.205 [2024-11-19 16:30:29.472172] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:39.205 [2024-11-19 16:30:29.472248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278016 ] 00:24:39.205 [2024-11-19 16:30:29.537225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.463 [2024-11-19 16:30:29.582624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:39.463 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:39.463 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:39.463 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:39.721 16:30:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:39.980 [2024-11-19 16:30:30.221195] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.980 TLSTESTn1 00:24:39.980 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:24:40.546 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:24:40.546 "subsystems": [ 00:24:40.546 { 00:24:40.546 "subsystem": "keyring", 00:24:40.546 "config": [ 00:24:40.546 { 00:24:40.546 "method": "keyring_file_add_key", 00:24:40.546 "params": { 00:24:40.546 "name": "key0", 00:24:40.546 "path": "/tmp/tmp.Um4T2NlSTs" 00:24:40.546 } 00:24:40.546 } 00:24:40.546 ] 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "subsystem": "iobuf", 00:24:40.546 "config": [ 00:24:40.546 { 00:24:40.546 "method": "iobuf_set_options", 00:24:40.546 "params": { 00:24:40.546 "small_pool_count": 8192, 00:24:40.546 "large_pool_count": 1024, 00:24:40.546 "small_bufsize": 8192, 00:24:40.546 "large_bufsize": 135168, 00:24:40.546 "enable_numa": false 00:24:40.546 } 00:24:40.546 } 00:24:40.546 ] 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "subsystem": "sock", 00:24:40.546 "config": [ 00:24:40.546 { 00:24:40.546 "method": "sock_set_default_impl", 00:24:40.546 "params": { 00:24:40.546 "impl_name": "posix" 00:24:40.546 } 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "method": "sock_impl_set_options", 00:24:40.546 "params": { 00:24:40.546 "impl_name": "ssl", 00:24:40.546 "recv_buf_size": 4096, 00:24:40.546 "send_buf_size": 4096, 00:24:40.546 "enable_recv_pipe": true, 00:24:40.546 "enable_quickack": false, 00:24:40.546 "enable_placement_id": 0, 00:24:40.546 "enable_zerocopy_send_server": true, 00:24:40.546 "enable_zerocopy_send_client": false, 00:24:40.546 "zerocopy_threshold": 0, 00:24:40.546 "tls_version": 0, 00:24:40.546 "enable_ktls": false 00:24:40.546 } 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "method": "sock_impl_set_options", 00:24:40.546 "params": { 00:24:40.546 "impl_name": "posix", 00:24:40.546 "recv_buf_size": 2097152, 00:24:40.546 "send_buf_size": 2097152, 00:24:40.546 "enable_recv_pipe": true, 00:24:40.546 "enable_quickack": false, 00:24:40.546 "enable_placement_id": 0, 
00:24:40.546 "enable_zerocopy_send_server": true, 00:24:40.546 "enable_zerocopy_send_client": false, 00:24:40.546 "zerocopy_threshold": 0, 00:24:40.546 "tls_version": 0, 00:24:40.546 "enable_ktls": false 00:24:40.546 } 00:24:40.546 } 00:24:40.546 ] 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "subsystem": "vmd", 00:24:40.546 "config": [] 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "subsystem": "accel", 00:24:40.546 "config": [ 00:24:40.546 { 00:24:40.546 "method": "accel_set_options", 00:24:40.546 "params": { 00:24:40.546 "small_cache_size": 128, 00:24:40.546 "large_cache_size": 16, 00:24:40.546 "task_count": 2048, 00:24:40.546 "sequence_count": 2048, 00:24:40.546 "buf_count": 2048 00:24:40.546 } 00:24:40.546 } 00:24:40.546 ] 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "subsystem": "bdev", 00:24:40.546 "config": [ 00:24:40.546 { 00:24:40.546 "method": "bdev_set_options", 00:24:40.546 "params": { 00:24:40.546 "bdev_io_pool_size": 65535, 00:24:40.546 "bdev_io_cache_size": 256, 00:24:40.546 "bdev_auto_examine": true, 00:24:40.546 "iobuf_small_cache_size": 128, 00:24:40.546 "iobuf_large_cache_size": 16 00:24:40.546 } 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "method": "bdev_raid_set_options", 00:24:40.546 "params": { 00:24:40.546 "process_window_size_kb": 1024, 00:24:40.546 "process_max_bandwidth_mb_sec": 0 00:24:40.546 } 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "method": "bdev_iscsi_set_options", 00:24:40.546 "params": { 00:24:40.546 "timeout_sec": 30 00:24:40.546 } 00:24:40.546 }, 00:24:40.546 { 00:24:40.546 "method": "bdev_nvme_set_options", 00:24:40.546 "params": { 00:24:40.546 "action_on_timeout": "none", 00:24:40.546 "timeout_us": 0, 00:24:40.546 "timeout_admin_us": 0, 00:24:40.546 "keep_alive_timeout_ms": 10000, 00:24:40.546 "arbitration_burst": 0, 00:24:40.546 "low_priority_weight": 0, 00:24:40.546 "medium_priority_weight": 0, 00:24:40.546 "high_priority_weight": 0, 00:24:40.546 "nvme_adminq_poll_period_us": 10000, 00:24:40.546 "nvme_ioq_poll_period_us": 0, 
00:24:40.546 "io_queue_requests": 0, 00:24:40.546 "delay_cmd_submit": true, 00:24:40.546 "transport_retry_count": 4, 00:24:40.546 "bdev_retry_count": 3, 00:24:40.546 "transport_ack_timeout": 0, 00:24:40.546 "ctrlr_loss_timeout_sec": 0, 00:24:40.547 "reconnect_delay_sec": 0, 00:24:40.547 "fast_io_fail_timeout_sec": 0, 00:24:40.547 "disable_auto_failback": false, 00:24:40.547 "generate_uuids": false, 00:24:40.547 "transport_tos": 0, 00:24:40.547 "nvme_error_stat": false, 00:24:40.547 "rdma_srq_size": 0, 00:24:40.547 "io_path_stat": false, 00:24:40.547 "allow_accel_sequence": false, 00:24:40.547 "rdma_max_cq_size": 0, 00:24:40.547 "rdma_cm_event_timeout_ms": 0, 00:24:40.547 "dhchap_digests": [ 00:24:40.547 "sha256", 00:24:40.547 "sha384", 00:24:40.547 "sha512" 00:24:40.547 ], 00:24:40.547 "dhchap_dhgroups": [ 00:24:40.547 "null", 00:24:40.547 "ffdhe2048", 00:24:40.547 "ffdhe3072", 00:24:40.547 "ffdhe4096", 00:24:40.547 "ffdhe6144", 00:24:40.547 "ffdhe8192" 00:24:40.547 ] 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_nvme_set_hotplug", 00:24:40.547 "params": { 00:24:40.547 "period_us": 100000, 00:24:40.547 "enable": false 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_malloc_create", 00:24:40.547 "params": { 00:24:40.547 "name": "malloc0", 00:24:40.547 "num_blocks": 8192, 00:24:40.547 "block_size": 4096, 00:24:40.547 "physical_block_size": 4096, 00:24:40.547 "uuid": "1e13ee80-edca-405c-af03-88195b2b0b1d", 00:24:40.547 "optimal_io_boundary": 0, 00:24:40.547 "md_size": 0, 00:24:40.547 "dif_type": 0, 00:24:40.547 "dif_is_head_of_md": false, 00:24:40.547 "dif_pi_format": 0 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "bdev_wait_for_examine" 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "nbd", 00:24:40.547 "config": [] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "scheduler", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": 
"framework_set_scheduler", 00:24:40.547 "params": { 00:24:40.547 "name": "static" 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "subsystem": "nvmf", 00:24:40.547 "config": [ 00:24:40.547 { 00:24:40.547 "method": "nvmf_set_config", 00:24:40.547 "params": { 00:24:40.547 "discovery_filter": "match_any", 00:24:40.547 "admin_cmd_passthru": { 00:24:40.547 "identify_ctrlr": false 00:24:40.547 }, 00:24:40.547 "dhchap_digests": [ 00:24:40.547 "sha256", 00:24:40.547 "sha384", 00:24:40.547 "sha512" 00:24:40.547 ], 00:24:40.547 "dhchap_dhgroups": [ 00:24:40.547 "null", 00:24:40.547 "ffdhe2048", 00:24:40.547 "ffdhe3072", 00:24:40.547 "ffdhe4096", 00:24:40.547 "ffdhe6144", 00:24:40.547 "ffdhe8192" 00:24:40.547 ] 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_set_max_subsystems", 00:24:40.547 "params": { 00:24:40.547 "max_subsystems": 1024 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_set_crdt", 00:24:40.547 "params": { 00:24:40.547 "crdt1": 0, 00:24:40.547 "crdt2": 0, 00:24:40.547 "crdt3": 0 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_create_transport", 00:24:40.547 "params": { 00:24:40.547 "trtype": "TCP", 00:24:40.547 "max_queue_depth": 128, 00:24:40.547 "max_io_qpairs_per_ctrlr": 127, 00:24:40.547 "in_capsule_data_size": 4096, 00:24:40.547 "max_io_size": 131072, 00:24:40.547 "io_unit_size": 131072, 00:24:40.547 "max_aq_depth": 128, 00:24:40.547 "num_shared_buffers": 511, 00:24:40.547 "buf_cache_size": 4294967295, 00:24:40.547 "dif_insert_or_strip": false, 00:24:40.547 "zcopy": false, 00:24:40.547 "c2h_success": false, 00:24:40.547 "sock_priority": 0, 00:24:40.547 "abort_timeout_sec": 1, 00:24:40.547 "ack_timeout": 0, 00:24:40.547 "data_wr_pool_size": 0 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_create_subsystem", 00:24:40.547 "params": { 00:24:40.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.547 
"allow_any_host": false, 00:24:40.547 "serial_number": "SPDK00000000000001", 00:24:40.547 "model_number": "SPDK bdev Controller", 00:24:40.547 "max_namespaces": 10, 00:24:40.547 "min_cntlid": 1, 00:24:40.547 "max_cntlid": 65519, 00:24:40.547 "ana_reporting": false 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_subsystem_add_host", 00:24:40.547 "params": { 00:24:40.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.547 "host": "nqn.2016-06.io.spdk:host1", 00:24:40.547 "psk": "key0" 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_subsystem_add_ns", 00:24:40.547 "params": { 00:24:40.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.547 "namespace": { 00:24:40.547 "nsid": 1, 00:24:40.547 "bdev_name": "malloc0", 00:24:40.547 "nguid": "1E13EE80EDCA405CAF0388195B2B0B1D", 00:24:40.547 "uuid": "1e13ee80-edca-405c-af03-88195b2b0b1d", 00:24:40.547 "no_auto_visible": false 00:24:40.547 } 00:24:40.547 } 00:24:40.547 }, 00:24:40.547 { 00:24:40.547 "method": "nvmf_subsystem_add_listener", 00:24:40.547 "params": { 00:24:40.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.547 "listen_address": { 00:24:40.547 "trtype": "TCP", 00:24:40.547 "adrfam": "IPv4", 00:24:40.547 "traddr": "10.0.0.2", 00:24:40.547 "trsvcid": "4420" 00:24:40.547 }, 00:24:40.547 "secure_channel": true 00:24:40.547 } 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 } 00:24:40.547 ] 00:24:40.547 }' 00:24:40.547 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:24:40.806 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:24:40.806 "subsystems": [ 00:24:40.806 { 00:24:40.806 "subsystem": "keyring", 00:24:40.806 "config": [ 00:24:40.806 { 00:24:40.806 "method": "keyring_file_add_key", 00:24:40.806 "params": { 00:24:40.806 "name": "key0", 00:24:40.806 "path": "/tmp/tmp.Um4T2NlSTs" 00:24:40.806 } 
00:24:40.806 } 00:24:40.806 ] 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "subsystem": "iobuf", 00:24:40.806 "config": [ 00:24:40.806 { 00:24:40.806 "method": "iobuf_set_options", 00:24:40.806 "params": { 00:24:40.806 "small_pool_count": 8192, 00:24:40.806 "large_pool_count": 1024, 00:24:40.806 "small_bufsize": 8192, 00:24:40.806 "large_bufsize": 135168, 00:24:40.806 "enable_numa": false 00:24:40.806 } 00:24:40.806 } 00:24:40.806 ] 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "subsystem": "sock", 00:24:40.806 "config": [ 00:24:40.806 { 00:24:40.806 "method": "sock_set_default_impl", 00:24:40.806 "params": { 00:24:40.806 "impl_name": "posix" 00:24:40.806 } 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "method": "sock_impl_set_options", 00:24:40.806 "params": { 00:24:40.806 "impl_name": "ssl", 00:24:40.806 "recv_buf_size": 4096, 00:24:40.806 "send_buf_size": 4096, 00:24:40.806 "enable_recv_pipe": true, 00:24:40.806 "enable_quickack": false, 00:24:40.806 "enable_placement_id": 0, 00:24:40.806 "enable_zerocopy_send_server": true, 00:24:40.806 "enable_zerocopy_send_client": false, 00:24:40.806 "zerocopy_threshold": 0, 00:24:40.806 "tls_version": 0, 00:24:40.806 "enable_ktls": false 00:24:40.806 } 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "method": "sock_impl_set_options", 00:24:40.806 "params": { 00:24:40.806 "impl_name": "posix", 00:24:40.806 "recv_buf_size": 2097152, 00:24:40.806 "send_buf_size": 2097152, 00:24:40.806 "enable_recv_pipe": true, 00:24:40.806 "enable_quickack": false, 00:24:40.806 "enable_placement_id": 0, 00:24:40.806 "enable_zerocopy_send_server": true, 00:24:40.806 "enable_zerocopy_send_client": false, 00:24:40.806 "zerocopy_threshold": 0, 00:24:40.806 "tls_version": 0, 00:24:40.806 "enable_ktls": false 00:24:40.806 } 00:24:40.806 } 00:24:40.806 ] 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "subsystem": "vmd", 00:24:40.806 "config": [] 00:24:40.806 }, 00:24:40.806 { 00:24:40.806 "subsystem": "accel", 00:24:40.806 "config": [ 00:24:40.806 { 00:24:40.806 
"method": "accel_set_options", 00:24:40.806 "params": { 00:24:40.806 "small_cache_size": 128, 00:24:40.806 "large_cache_size": 16, 00:24:40.806 "task_count": 2048, 00:24:40.806 "sequence_count": 2048, 00:24:40.806 "buf_count": 2048 00:24:40.806 } 00:24:40.806 } 00:24:40.807 ] 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "subsystem": "bdev", 00:24:40.807 "config": [ 00:24:40.807 { 00:24:40.807 "method": "bdev_set_options", 00:24:40.807 "params": { 00:24:40.807 "bdev_io_pool_size": 65535, 00:24:40.807 "bdev_io_cache_size": 256, 00:24:40.807 "bdev_auto_examine": true, 00:24:40.807 "iobuf_small_cache_size": 128, 00:24:40.807 "iobuf_large_cache_size": 16 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_raid_set_options", 00:24:40.807 "params": { 00:24:40.807 "process_window_size_kb": 1024, 00:24:40.807 "process_max_bandwidth_mb_sec": 0 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_iscsi_set_options", 00:24:40.807 "params": { 00:24:40.807 "timeout_sec": 30 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_nvme_set_options", 00:24:40.807 "params": { 00:24:40.807 "action_on_timeout": "none", 00:24:40.807 "timeout_us": 0, 00:24:40.807 "timeout_admin_us": 0, 00:24:40.807 "keep_alive_timeout_ms": 10000, 00:24:40.807 "arbitration_burst": 0, 00:24:40.807 "low_priority_weight": 0, 00:24:40.807 "medium_priority_weight": 0, 00:24:40.807 "high_priority_weight": 0, 00:24:40.807 "nvme_adminq_poll_period_us": 10000, 00:24:40.807 "nvme_ioq_poll_period_us": 0, 00:24:40.807 "io_queue_requests": 512, 00:24:40.807 "delay_cmd_submit": true, 00:24:40.807 "transport_retry_count": 4, 00:24:40.807 "bdev_retry_count": 3, 00:24:40.807 "transport_ack_timeout": 0, 00:24:40.807 "ctrlr_loss_timeout_sec": 0, 00:24:40.807 "reconnect_delay_sec": 0, 00:24:40.807 "fast_io_fail_timeout_sec": 0, 00:24:40.807 "disable_auto_failback": false, 00:24:40.807 "generate_uuids": false, 00:24:40.807 "transport_tos": 0, 00:24:40.807 
"nvme_error_stat": false, 00:24:40.807 "rdma_srq_size": 0, 00:24:40.807 "io_path_stat": false, 00:24:40.807 "allow_accel_sequence": false, 00:24:40.807 "rdma_max_cq_size": 0, 00:24:40.807 "rdma_cm_event_timeout_ms": 0, 00:24:40.807 "dhchap_digests": [ 00:24:40.807 "sha256", 00:24:40.807 "sha384", 00:24:40.807 "sha512" 00:24:40.807 ], 00:24:40.807 "dhchap_dhgroups": [ 00:24:40.807 "null", 00:24:40.807 "ffdhe2048", 00:24:40.807 "ffdhe3072", 00:24:40.807 "ffdhe4096", 00:24:40.807 "ffdhe6144", 00:24:40.807 "ffdhe8192" 00:24:40.807 ] 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_nvme_attach_controller", 00:24:40.807 "params": { 00:24:40.807 "name": "TLSTEST", 00:24:40.807 "trtype": "TCP", 00:24:40.807 "adrfam": "IPv4", 00:24:40.807 "traddr": "10.0.0.2", 00:24:40.807 "trsvcid": "4420", 00:24:40.807 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:40.807 "prchk_reftag": false, 00:24:40.807 "prchk_guard": false, 00:24:40.807 "ctrlr_loss_timeout_sec": 0, 00:24:40.807 "reconnect_delay_sec": 0, 00:24:40.807 "fast_io_fail_timeout_sec": 0, 00:24:40.807 "psk": "key0", 00:24:40.807 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:40.807 "hdgst": false, 00:24:40.807 "ddgst": false, 00:24:40.807 "multipath": "multipath" 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_nvme_set_hotplug", 00:24:40.807 "params": { 00:24:40.807 "period_us": 100000, 00:24:40.807 "enable": false 00:24:40.807 } 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "method": "bdev_wait_for_examine" 00:24:40.807 } 00:24:40.807 ] 00:24:40.807 }, 00:24:40.807 { 00:24:40.807 "subsystem": "nbd", 00:24:40.807 "config": [] 00:24:40.807 } 00:24:40.807 ] 00:24:40.807 }' 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 278016 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278016 ']' 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # 
kill -0 278016 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:40.807 16:30:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278016 00:24:40.807 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:40.807 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:40.807 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278016' 00:24:40.807 killing process with pid 278016 00:24:40.807 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278016 00:24:40.807 Received shutdown signal, test time was about 10.000000 seconds 00:24:40.807 00:24:40.807 Latency(us) 00:24:40.807 [2024-11-19T15:30:31.146Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:40.807 [2024-11-19T15:30:31.146Z] =================================================================================================================== 00:24:40.807 [2024-11-19T15:30:31.146Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:24:40.807 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278016 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 277725 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 277725 ']' 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 277725 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 277725 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 277725' 00:24:41.065 killing process with pid 277725 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 277725 00:24:41.065 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 277725 00:24:41.324 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:24:41.324 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:41.324 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:41.324 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:24:41.324 "subsystems": [ 00:24:41.324 { 00:24:41.324 "subsystem": "keyring", 00:24:41.324 "config": [ 00:24:41.324 { 00:24:41.324 "method": "keyring_file_add_key", 00:24:41.324 "params": { 00:24:41.324 "name": "key0", 00:24:41.324 "path": "/tmp/tmp.Um4T2NlSTs" 00:24:41.324 } 00:24:41.324 } 00:24:41.324 ] 00:24:41.324 }, 00:24:41.324 { 00:24:41.324 "subsystem": "iobuf", 00:24:41.324 "config": [ 00:24:41.324 { 00:24:41.324 "method": "iobuf_set_options", 00:24:41.324 "params": { 00:24:41.324 "small_pool_count": 8192, 00:24:41.324 "large_pool_count": 1024, 00:24:41.324 "small_bufsize": 8192, 00:24:41.324 "large_bufsize": 135168, 00:24:41.324 "enable_numa": false 00:24:41.324 } 00:24:41.324 } 00:24:41.324 ] 00:24:41.324 }, 00:24:41.324 
{ 00:24:41.324 "subsystem": "sock", 00:24:41.324 "config": [ 00:24:41.324 { 00:24:41.324 "method": "sock_set_default_impl", 00:24:41.324 "params": { 00:24:41.324 "impl_name": "posix" 00:24:41.324 } 00:24:41.324 }, 00:24:41.324 { 00:24:41.324 "method": "sock_impl_set_options", 00:24:41.324 "params": { 00:24:41.324 "impl_name": "ssl", 00:24:41.324 "recv_buf_size": 4096, 00:24:41.324 "send_buf_size": 4096, 00:24:41.324 "enable_recv_pipe": true, 00:24:41.324 "enable_quickack": false, 00:24:41.324 "enable_placement_id": 0, 00:24:41.324 "enable_zerocopy_send_server": true, 00:24:41.324 "enable_zerocopy_send_client": false, 00:24:41.324 "zerocopy_threshold": 0, 00:24:41.324 "tls_version": 0, 00:24:41.324 "enable_ktls": false 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "sock_impl_set_options", 00:24:41.325 "params": { 00:24:41.325 "impl_name": "posix", 00:24:41.325 "recv_buf_size": 2097152, 00:24:41.325 "send_buf_size": 2097152, 00:24:41.325 "enable_recv_pipe": true, 00:24:41.325 "enable_quickack": false, 00:24:41.325 "enable_placement_id": 0, 00:24:41.325 "enable_zerocopy_send_server": true, 00:24:41.325 "enable_zerocopy_send_client": false, 00:24:41.325 "zerocopy_threshold": 0, 00:24:41.325 "tls_version": 0, 00:24:41.325 "enable_ktls": false 00:24:41.325 } 00:24:41.325 } 00:24:41.325 ] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "vmd", 00:24:41.325 "config": [] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "accel", 00:24:41.325 "config": [ 00:24:41.325 { 00:24:41.325 "method": "accel_set_options", 00:24:41.325 "params": { 00:24:41.325 "small_cache_size": 128, 00:24:41.325 "large_cache_size": 16, 00:24:41.325 "task_count": 2048, 00:24:41.325 "sequence_count": 2048, 00:24:41.325 "buf_count": 2048 00:24:41.325 } 00:24:41.325 } 00:24:41.325 ] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "bdev", 00:24:41.325 "config": [ 00:24:41.325 { 00:24:41.325 "method": "bdev_set_options", 00:24:41.325 "params": { 00:24:41.325 
"bdev_io_pool_size": 65535, 00:24:41.325 "bdev_io_cache_size": 256, 00:24:41.325 "bdev_auto_examine": true, 00:24:41.325 "iobuf_small_cache_size": 128, 00:24:41.325 "iobuf_large_cache_size": 16 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_raid_set_options", 00:24:41.325 "params": { 00:24:41.325 "process_window_size_kb": 1024, 00:24:41.325 "process_max_bandwidth_mb_sec": 0 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_iscsi_set_options", 00:24:41.325 "params": { 00:24:41.325 "timeout_sec": 30 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_nvme_set_options", 00:24:41.325 "params": { 00:24:41.325 "action_on_timeout": "none", 00:24:41.325 "timeout_us": 0, 00:24:41.325 "timeout_admin_us": 0, 00:24:41.325 "keep_alive_timeout_ms": 10000, 00:24:41.325 "arbitration_burst": 0, 00:24:41.325 "low_priority_weight": 0, 00:24:41.325 "medium_priority_weight": 0, 00:24:41.325 "high_priority_weight": 0, 00:24:41.325 "nvme_adminq_poll_period_us": 10000, 00:24:41.325 "nvme_ioq_poll_period_us": 0, 00:24:41.325 "io_queue_requests": 0, 00:24:41.325 "delay_cmd_submit": true, 00:24:41.325 "transport_retry_count": 4, 00:24:41.325 "bdev_retry_count": 3, 00:24:41.325 "transport_ack_timeout": 0, 00:24:41.325 "ctrlr_loss_timeout_sec": 0, 00:24:41.325 "reconnect_delay_sec": 0, 00:24:41.325 "fast_io_fail_timeout_sec": 0, 00:24:41.325 "disable_auto_failback": false, 00:24:41.325 "generate_uuids": false, 00:24:41.325 "transport_tos": 0, 00:24:41.325 "nvme_error_stat": false, 00:24:41.325 "rdma_srq_size": 0, 00:24:41.325 "io_path_stat": false, 00:24:41.325 "allow_accel_sequence": false, 00:24:41.325 "rdma_max_cq_size": 0, 00:24:41.325 "rdma_cm_event_timeout_ms": 0, 00:24:41.325 "dhchap_digests": [ 00:24:41.325 "sha256", 00:24:41.325 "sha384", 00:24:41.325 "sha512" 00:24:41.325 ], 00:24:41.325 "dhchap_dhgroups": [ 00:24:41.325 "null", 00:24:41.325 "ffdhe2048", 00:24:41.325 "ffdhe3072", 00:24:41.325 "ffdhe4096", 
00:24:41.325 "ffdhe6144", 00:24:41.325 "ffdhe8192" 00:24:41.325 ] 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_nvme_set_hotplug", 00:24:41.325 "params": { 00:24:41.325 "period_us": 100000, 00:24:41.325 "enable": false 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_malloc_create", 00:24:41.325 "params": { 00:24:41.325 "name": "malloc0", 00:24:41.325 "num_blocks": 8192, 00:24:41.325 "block_size": 4096, 00:24:41.325 "physical_block_size": 4096, 00:24:41.325 "uuid": "1e13ee80-edca-405c-af03-88195b2b0b1d", 00:24:41.325 "optimal_io_boundary": 0, 00:24:41.325 "md_size": 0, 00:24:41.325 "dif_type": 0, 00:24:41.325 "dif_is_head_of_md": false, 00:24:41.325 "dif_pi_format": 0 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "bdev_wait_for_examine" 00:24:41.325 } 00:24:41.325 ] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "nbd", 00:24:41.325 "config": [] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "scheduler", 00:24:41.325 "config": [ 00:24:41.325 { 00:24:41.325 "method": "framework_set_scheduler", 00:24:41.325 "params": { 00:24:41.325 "name": "static" 00:24:41.325 } 00:24:41.325 } 00:24:41.325 ] 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "subsystem": "nvmf", 00:24:41.325 "config": [ 00:24:41.325 { 00:24:41.325 "method": "nvmf_set_config", 00:24:41.325 "params": { 00:24:41.325 "discovery_filter": "match_any", 00:24:41.325 "admin_cmd_passthru": { 00:24:41.325 "identify_ctrlr": false 00:24:41.325 }, 00:24:41.325 "dhchap_digests": [ 00:24:41.325 "sha256", 00:24:41.325 "sha384", 00:24:41.325 "sha512" 00:24:41.325 ], 00:24:41.325 "dhchap_dhgroups": [ 00:24:41.325 "null", 00:24:41.325 "ffdhe2048", 00:24:41.325 "ffdhe3072", 00:24:41.325 "ffdhe4096", 00:24:41.325 "ffdhe6144", 00:24:41.325 "ffdhe8192" 00:24:41.325 ] 00:24:41.325 } 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "nvmf_set_max_subsystems", 00:24:41.325 "params": { 00:24:41.325 "max_subsystems": 1024 00:24:41.325 
} 00:24:41.325 }, 00:24:41.325 { 00:24:41.325 "method": "nvmf_set_crdt", 00:24:41.325 "params": { 00:24:41.325 "crdt1": 0, 00:24:41.326 "crdt2": 0, 00:24:41.326 "crdt3": 0 00:24:41.326 } 00:24:41.326 }, 00:24:41.326 { 00:24:41.326 "method": "nvmf_create_transport", 00:24:41.326 "params": { 00:24:41.326 "trtype": "TCP", 00:24:41.326 "max_queue_depth": 128, 00:24:41.326 "max_io_qpairs_per_ctrlr": 127, 00:24:41.326 "in_capsule_data_size": 4096, 00:24:41.326 "max_io_size": 131072, 00:24:41.326 "io_unit_size": 131072, 00:24:41.326 "max_aq_depth": 128, 00:24:41.326 "num_shared_buffers": 511, 00:24:41.326 "buf_cache_size": 4294967295, 00:24:41.326 "dif_insert_or_strip": false, 00:24:41.326 "zcopy": false, 00:24:41.326 "c2h_success": false, 00:24:41.326 "sock_priority": 0, 00:24:41.326 "abort_timeout_sec": 1, 00:24:41.326 "ack_timeout": 0, 00:24:41.326 "data_wr_pool_size": 0 00:24:41.326 } 00:24:41.326 }, 00:24:41.326 { 00:24:41.326 "method": "nvmf_create_subsystem", 00:24:41.326 "params": { 00:24:41.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.326 "allow_any_host": false, 00:24:41.326 "serial_number": "SPDK00000000000001", 00:24:41.326 "model_number": "SPDK bdev Controller", 00:24:41.326 "max_namespaces": 10, 00:24:41.326 "min_cntlid": 1, 00:24:41.326 "max_cntlid": 65519, 00:24:41.326 "ana_reporting": false 00:24:41.326 } 00:24:41.326 }, 00:24:41.326 { 00:24:41.326 "method": "nvmf_subsystem_add_host", 00:24:41.326 "params": { 00:24:41.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.326 "host": "nqn.2016-06.io.spdk:host1", 00:24:41.326 "psk": "key0" 00:24:41.326 } 00:24:41.326 }, 00:24:41.326 { 00:24:41.326 "method": "nvmf_subsystem_add_ns", 00:24:41.326 "params": { 00:24:41.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.326 "namespace": { 00:24:41.326 "nsid": 1, 00:24:41.326 "bdev_name": "malloc0", 00:24:41.326 "nguid": "1E13EE80EDCA405CAF0388195B2B0B1D", 00:24:41.326 "uuid": "1e13ee80-edca-405c-af03-88195b2b0b1d", 00:24:41.326 "no_auto_visible": false 
00:24:41.326 } 00:24:41.326 } 00:24:41.326 }, 00:24:41.326 { 00:24:41.326 "method": "nvmf_subsystem_add_listener", 00:24:41.326 "params": { 00:24:41.326 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:41.326 "listen_address": { 00:24:41.326 "trtype": "TCP", 00:24:41.326 "adrfam": "IPv4", 00:24:41.326 "traddr": "10.0.0.2", 00:24:41.326 "trsvcid": "4420" 00:24:41.326 }, 00:24:41.326 "secure_channel": true 00:24:41.326 } 00:24:41.326 } 00:24:41.326 ] 00:24:41.326 } 00:24:41.326 ] 00:24:41.326 }' 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=278297 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 278297 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278297 ']' 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:41.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:41.326 16:30:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:41.326 [2024-11-19 16:30:31.503649] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:41.326 [2024-11-19 16:30:31.503734] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.326 [2024-11-19 16:30:31.573894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.326 [2024-11-19 16:30:31.621259] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.326 [2024-11-19 16:30:31.621322] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.326 [2024-11-19 16:30:31.621336] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:41.326 [2024-11-19 16:30:31.621347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:41.326 [2024-11-19 16:30:31.621356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:41.326 [2024-11-19 16:30:31.621998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.584 [2024-11-19 16:30:31.860134] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:41.584 [2024-11-19 16:30:31.892163] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:41.584 [2024-11-19 16:30:31.892425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=278450 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 278450 /var/tmp/bdevperf.sock 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 278450 ']' 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
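At this point the target is up and listening for TLS connections on 10.0.0.2:4420. The saved configuration dumped above boils down to a short sequence of JSON-RPC calls: register the PSK file as a keyring key, create the TCP transport and subsystem, bind the host NQN to the key, and open a listener with `secure_channel` enabled. A minimal sketch of those request bodies follows; the NQNs, key name, and address are taken from the log, but the helper function itself is hypothetical and not part of the test script, and transport parameters not shown are left at SPDK defaults:

```python
import json

def tls_target_rpcs(psk_path, traddr="10.0.0.2", trsvcid="4420"):
    """Build JSON-RPC request bodies for a PSK-secured NVMe/TCP target.

    Mirrors the key steps visible in the saved config above (hypothetical
    helper; in the test these calls are issued via scripts/rpc.py).
    """
    nqn, host = "nqn.2016-06.io.spdk:cnode1", "nqn.2016-06.io.spdk:host1"
    calls = [
        ("keyring_file_add_key", {"name": "key0", "path": psk_path}),
        ("nvmf_create_transport", {"trtype": "TCP"}),
        ("nvmf_create_subsystem", {"nqn": nqn, "allow_any_host": False}),
        ("nvmf_subsystem_add_host", {"nqn": nqn, "host": host, "psk": "key0"}),
        ("nvmf_subsystem_add_listener", {
            "nqn": nqn,
            "listen_address": {"trtype": "TCP", "adrfam": "IPv4",
                              "traddr": traddr, "trsvcid": trsvcid},
            "secure_channel": True,  # host must present the PSK to connect
        }),
    ]
    # Wrap each call in a JSON-RPC 2.0 envelope, as rpc.py would.
    return [json.dumps({"jsonrpc": "2.0", "id": i, "method": m, "params": p})
            for i, (m, p) in enumerate(calls, 1)]
```

Each string could then be written to the application's RPC socket; in the log above this is done indirectly through `scripts/rpc.py` and round-tripped back out with `save_config`.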
max_retries=100 00:24:42.518 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:24:42.518 "subsystems": [ 00:24:42.518 { 00:24:42.518 "subsystem": "keyring", 00:24:42.518 "config": [ 00:24:42.518 { 00:24:42.518 "method": "keyring_file_add_key", 00:24:42.518 "params": { 00:24:42.518 "name": "key0", 00:24:42.518 "path": "/tmp/tmp.Um4T2NlSTs" 00:24:42.518 } 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "subsystem": "iobuf", 00:24:42.518 "config": [ 00:24:42.518 { 00:24:42.518 "method": "iobuf_set_options", 00:24:42.518 "params": { 00:24:42.518 "small_pool_count": 8192, 00:24:42.518 "large_pool_count": 1024, 00:24:42.518 "small_bufsize": 8192, 00:24:42.518 "large_bufsize": 135168, 00:24:42.518 "enable_numa": false 00:24:42.518 } 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "subsystem": "sock", 00:24:42.518 "config": [ 00:24:42.518 { 00:24:42.518 "method": "sock_set_default_impl", 00:24:42.518 "params": { 00:24:42.518 "impl_name": "posix" 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "sock_impl_set_options", 00:24:42.518 "params": { 00:24:42.518 "impl_name": "ssl", 00:24:42.518 "recv_buf_size": 4096, 00:24:42.518 "send_buf_size": 4096, 00:24:42.518 "enable_recv_pipe": true, 00:24:42.518 "enable_quickack": false, 00:24:42.518 "enable_placement_id": 0, 00:24:42.518 "enable_zerocopy_send_server": true, 00:24:42.518 "enable_zerocopy_send_client": false, 00:24:42.518 "zerocopy_threshold": 0, 00:24:42.518 "tls_version": 0, 00:24:42.518 "enable_ktls": false 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "sock_impl_set_options", 00:24:42.518 "params": { 00:24:42.518 "impl_name": "posix", 00:24:42.518 "recv_buf_size": 2097152, 00:24:42.518 "send_buf_size": 2097152, 00:24:42.518 "enable_recv_pipe": true, 00:24:42.518 "enable_quickack": false, 00:24:42.518 "enable_placement_id": 0, 00:24:42.518 "enable_zerocopy_send_server": true, 00:24:42.518 
"enable_zerocopy_send_client": false, 00:24:42.518 "zerocopy_threshold": 0, 00:24:42.518 "tls_version": 0, 00:24:42.518 "enable_ktls": false 00:24:42.518 } 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "subsystem": "vmd", 00:24:42.518 "config": [] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "subsystem": "accel", 00:24:42.518 "config": [ 00:24:42.518 { 00:24:42.518 "method": "accel_set_options", 00:24:42.518 "params": { 00:24:42.518 "small_cache_size": 128, 00:24:42.518 "large_cache_size": 16, 00:24:42.518 "task_count": 2048, 00:24:42.518 "sequence_count": 2048, 00:24:42.518 "buf_count": 2048 00:24:42.518 } 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "subsystem": "bdev", 00:24:42.518 "config": [ 00:24:42.518 { 00:24:42.518 "method": "bdev_set_options", 00:24:42.518 "params": { 00:24:42.518 "bdev_io_pool_size": 65535, 00:24:42.518 "bdev_io_cache_size": 256, 00:24:42.518 "bdev_auto_examine": true, 00:24:42.518 "iobuf_small_cache_size": 128, 00:24:42.518 "iobuf_large_cache_size": 16 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_raid_set_options", 00:24:42.518 "params": { 00:24:42.518 "process_window_size_kb": 1024, 00:24:42.518 "process_max_bandwidth_mb_sec": 0 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_iscsi_set_options", 00:24:42.518 "params": { 00:24:42.518 "timeout_sec": 30 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_nvme_set_options", 00:24:42.518 "params": { 00:24:42.518 "action_on_timeout": "none", 00:24:42.518 "timeout_us": 0, 00:24:42.518 "timeout_admin_us": 0, 00:24:42.518 "keep_alive_timeout_ms": 10000, 00:24:42.518 "arbitration_burst": 0, 00:24:42.518 "low_priority_weight": 0, 00:24:42.518 "medium_priority_weight": 0, 00:24:42.518 "high_priority_weight": 0, 00:24:42.518 "nvme_adminq_poll_period_us": 10000, 00:24:42.518 "nvme_ioq_poll_period_us": 0, 00:24:42.518 "io_queue_requests": 512, 00:24:42.518 
"delay_cmd_submit": true, 00:24:42.518 "transport_retry_count": 4, 00:24:42.518 "bdev_retry_count": 3, 00:24:42.518 "transport_ack_timeout": 0, 00:24:42.518 "ctrlr_loss_timeout_sec": 0, 00:24:42.518 "reconnect_delay_sec": 0, 00:24:42.518 "fast_io_fail_timeout_sec": 0, 00:24:42.518 "disable_auto_failback": false, 00:24:42.518 "generate_uuids": false, 00:24:42.518 "transport_tos": 0, 00:24:42.518 "nvme_error_stat": false, 00:24:42.518 "rdma_srq_size": 0, 00:24:42.518 "io_path_stat": false, 00:24:42.518 "allow_accel_sequence": false, 00:24:42.518 "rdma_max_cq_size": 0, 00:24:42.518 "rdma_cm_event_timeout_ms": 0, 00:24:42.518 "dhchap_digests": [ 00:24:42.518 "sha256", 00:24:42.518 "sha384", 00:24:42.518 "sha512" 00:24:42.518 ], 00:24:42.518 "dhchap_dhgroups": [ 00:24:42.518 "null", 00:24:42.518 "ffdhe2048", 00:24:42.518 "ffdhe3072", 00:24:42.518 "ffdhe4096", 00:24:42.518 "ffdhe6144", 00:24:42.518 "ffdhe8192" 00:24:42.518 ] 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_nvme_attach_controller", 00:24:42.518 "params": { 00:24:42.518 "name": "TLSTEST", 00:24:42.518 "trtype": "TCP", 00:24:42.518 "adrfam": "IPv4", 00:24:42.518 "traddr": "10.0.0.2", 00:24:42.518 "trsvcid": "4420", 00:24:42.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:42.518 "prchk_reftag": false, 00:24:42.518 "prchk_guard": false, 00:24:42.518 "ctrlr_loss_timeout_sec": 0, 00:24:42.518 "reconnect_delay_sec": 0, 00:24:42.518 "fast_io_fail_timeout_sec": 0, 00:24:42.518 "psk": "key0", 00:24:42.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:42.518 "hdgst": false, 00:24:42.518 "ddgst": false, 00:24:42.518 "multipath": "multipath" 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_nvme_set_hotplug", 00:24:42.518 "params": { 00:24:42.518 "period_us": 100000, 00:24:42.518 "enable": false 00:24:42.518 } 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 "method": "bdev_wait_for_examine" 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }, 00:24:42.518 { 00:24:42.518 
"subsystem": "nbd", 00:24:42.518 "config": [] 00:24:42.518 } 00:24:42.518 ] 00:24:42.518 }' 00:24:42.519 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:42.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:42.519 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.519 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:42.519 [2024-11-19 16:30:32.585969] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:42.519 [2024-11-19 16:30:32.586063] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid278450 ] 00:24:42.519 [2024-11-19 16:30:32.652331] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.519 [2024-11-19 16:30:32.698333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:42.777 [2024-11-19 16:30:32.877691] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:42.777 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:42.777 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:42.777 16:30:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:43.034 Running I/O for 10 seconds... 
00:24:44.901 3180.00 IOPS, 12.42 MiB/s [2024-11-19T15:30:36.173Z] 3202.00 IOPS, 12.51 MiB/s [2024-11-19T15:30:37.546Z] 3261.00 IOPS, 12.74 MiB/s [2024-11-19T15:30:38.479Z] 3255.50 IOPS, 12.72 MiB/s [2024-11-19T15:30:39.411Z] 3244.20 IOPS, 12.67 MiB/s [2024-11-19T15:30:40.342Z] 3264.33 IOPS, 12.75 MiB/s [2024-11-19T15:30:41.275Z] 3259.71 IOPS, 12.73 MiB/s [2024-11-19T15:30:42.208Z] 3252.88 IOPS, 12.71 MiB/s [2024-11-19T15:30:43.581Z] 3268.44 IOPS, 12.77 MiB/s [2024-11-19T15:30:43.581Z] 3278.80 IOPS, 12.81 MiB/s 00:24:53.242 Latency(us) 00:24:53.242 [2024-11-19T15:30:43.581Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.242 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:53.242 Verification LBA range: start 0x0 length 0x2000 00:24:53.242 TLSTESTn1 : 10.02 3283.87 12.83 0.00 0.00 38907.64 8495.41 54370.61 00:24:53.242 [2024-11-19T15:30:43.581Z] =================================================================================================================== 00:24:53.242 [2024-11-19T15:30:43.581Z] Total : 3283.87 12.83 0.00 0.00 38907.64 8495.41 54370.61 00:24:53.242 { 00:24:53.242 "results": [ 00:24:53.242 { 00:24:53.242 "job": "TLSTESTn1", 00:24:53.242 "core_mask": "0x4", 00:24:53.242 "workload": "verify", 00:24:53.242 "status": "finished", 00:24:53.242 "verify_range": { 00:24:53.242 "start": 0, 00:24:53.242 "length": 8192 00:24:53.242 }, 00:24:53.242 "queue_depth": 128, 00:24:53.242 "io_size": 4096, 00:24:53.242 "runtime": 10.022929, 00:24:53.242 "iops": 3283.87041352882, 00:24:53.242 "mibps": 12.827618802846953, 00:24:53.242 "io_failed": 0, 00:24:53.242 "io_timeout": 0, 00:24:53.242 "avg_latency_us": 38907.639981905704, 00:24:53.242 "min_latency_us": 8495.407407407407, 00:24:53.242 "max_latency_us": 54370.607407407406 00:24:53.242 } 00:24:53.242 ], 00:24:53.242 "core_count": 1 00:24:53.243 } 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 278450 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 278450 ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278450 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278450 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278450' 00:24:53.243 killing process with pid 278450 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278450 00:24:53.243 Received shutdown signal, test time was about 10.000000 seconds 00:24:53.243 00:24:53.243 Latency(us) 00:24:53.243 [2024-11-19T15:30:43.582Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:53.243 [2024-11-19T15:30:43.582Z] =================================================================================================================== 00:24:53.243 [2024-11-19T15:30:43.582Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278450 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 278297 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 
-- # '[' -z 278297 ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 278297 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 278297 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 278297' 00:24:53.243 killing process with pid 278297 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 278297 00:24:53.243 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 278297 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=279667 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 279667 00:24:53.501 16:30:43 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279667 ']' 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:53.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:53.501 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.501 [2024-11-19 16:30:43.721030] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:53.501 [2024-11-19 16:30:43.721148] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:53.501 [2024-11-19 16:30:43.798678] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:53.759 [2024-11-19 16:30:43.844527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:53.759 [2024-11-19 16:30:43.844569] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:53.759 [2024-11-19 16:30:43.844605] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:53.759 [2024-11-19 16:30:43.844616] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:24:53.759 [2024-11-19 16:30:43.844625] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:53.759 [2024-11-19 16:30:43.845209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Um4T2NlSTs 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Um4T2NlSTs 00:24:53.759 16:30:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:54.017 [2024-11-19 16:30:44.226610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.017 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:54.275 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:54.533 [2024-11-19 16:30:44.764065] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:24:54.533 [2024-11-19 16:30:44.764335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.533 16:30:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:54.791 malloc0 00:24:54.791 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:55.048 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:55.614 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=279986 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 279986 /var/tmp/bdevperf.sock 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 279986 ']' 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:55.873 16:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:55.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:55.873 16:30:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:55.873 [2024-11-19 16:30:46.026836] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:55.873 [2024-11-19 16:30:46.026930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid279986 ] 00:24:55.873 [2024-11-19 16:30:46.097386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.873 [2024-11-19 16:30:46.142444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.130 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:56.130 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:56.130 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:56.388 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:56.646 [2024-11-19 16:30:46.766109] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered 
experimental 00:24:56.646 nvme0n1 00:24:56.646 16:30:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:56.646 Running I/O for 1 seconds... 00:24:58.018 2965.00 IOPS, 11.58 MiB/s 00:24:58.018 Latency(us) 00:24:58.018 [2024-11-19T15:30:48.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.018 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:58.018 Verification LBA range: start 0x0 length 0x2000 00:24:58.018 nvme0n1 : 1.02 3020.40 11.80 0.00 0.00 41984.46 7767.23 33981.63 00:24:58.018 [2024-11-19T15:30:48.357Z] =================================================================================================================== 00:24:58.018 [2024-11-19T15:30:48.357Z] Total : 3020.40 11.80 0.00 0.00 41984.46 7767.23 33981.63 00:24:58.018 { 00:24:58.018 "results": [ 00:24:58.018 { 00:24:58.018 "job": "nvme0n1", 00:24:58.018 "core_mask": "0x2", 00:24:58.018 "workload": "verify", 00:24:58.018 "status": "finished", 00:24:58.018 "verify_range": { 00:24:58.018 "start": 0, 00:24:58.018 "length": 8192 00:24:58.018 }, 00:24:58.018 "queue_depth": 128, 00:24:58.018 "io_size": 4096, 00:24:58.018 "runtime": 1.024038, 00:24:58.018 "iops": 3020.3957275023, 00:24:58.018 "mibps": 11.798420810555859, 00:24:58.018 "io_failed": 0, 00:24:58.018 "io_timeout": 0, 00:24:58.018 "avg_latency_us": 41984.45957442732, 00:24:58.018 "min_latency_us": 7767.22962962963, 00:24:58.018 "max_latency_us": 33981.62962962963 00:24:58.018 } 00:24:58.018 ], 00:24:58.018 "core_count": 1 00:24:58.018 } 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 279986 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279986 ']' 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 279986 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279986 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279986' 00:24:58.018 killing process with pid 279986 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279986 00:24:58.018 Received shutdown signal, test time was about 1.000000 seconds 00:24:58.018 00:24:58.018 Latency(us) 00:24:58.018 [2024-11-19T15:30:48.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.018 [2024-11-19T15:30:48.357Z] =================================================================================================================== 00:24:58.018 [2024-11-19T15:30:48.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279986 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 279667 00:24:58.018 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 279667 ']' 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 279667 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279667 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279667' 00:24:58.019 killing process with pid 279667 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 279667 00:24:58.019 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 279667 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280329 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280329 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280329 ']' 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.277 16:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.277 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.277 [2024-11-19 16:30:48.570398] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:24:58.277 [2024-11-19 16:30:48.570501] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.535 [2024-11-19 16:30:48.642076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.535 [2024-11-19 16:30:48.683835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.535 [2024-11-19 16:30:48.683909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.535 [2024-11-19 16:30:48.683932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.535 [2024-11-19 16:30:48.683942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.535 [2024-11-19 16:30:48.683951] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:58.535 [2024-11-19 16:30:48.684541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.535 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.535 [2024-11-19 16:30:48.822667] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:58.535 malloc0 00:24:58.535 [2024-11-19 16:30:48.853852] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:58.535 [2024-11-19 16:30:48.854171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:58.793 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.793 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=280361 00:24:58.793 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 280361 /var/tmp/bdevperf.sock 00:24:58.793 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280361 ']' 00:24:58.794 16:30:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:24:58.794 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:58.794 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.794 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:58.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:58.794 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.794 16:30:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:58.794 [2024-11-19 16:30:48.926543] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:24:58.794 [2024-11-19 16:30:48.926631] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280361 ] 00:24:58.794 [2024-11-19 16:30:48.992601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.794 [2024-11-19 16:30:49.039712] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:59.050 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.050 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:24:59.050 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Um4T2NlSTs 00:24:59.307 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:24:59.565 [2024-11-19 16:30:49.668011] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:59.565 nvme0n1 00:24:59.565 16:30:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:59.565 Running I/O for 1 seconds... 
00:25:00.939 3211.00 IOPS, 12.54 MiB/s 00:25:00.939 Latency(us) 00:25:00.939 [2024-11-19T15:30:51.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.939 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:00.939 Verification LBA range: start 0x0 length 0x2000 00:25:00.939 nvme0n1 : 1.02 3258.18 12.73 0.00 0.00 38795.45 6553.60 45244.11 00:25:00.939 [2024-11-19T15:30:51.278Z] =================================================================================================================== 00:25:00.939 [2024-11-19T15:30:51.278Z] Total : 3258.18 12.73 0.00 0.00 38795.45 6553.60 45244.11 00:25:00.939 { 00:25:00.939 "results": [ 00:25:00.939 { 00:25:00.939 "job": "nvme0n1", 00:25:00.939 "core_mask": "0x2", 00:25:00.939 "workload": "verify", 00:25:00.939 "status": "finished", 00:25:00.939 "verify_range": { 00:25:00.939 "start": 0, 00:25:00.939 "length": 8192 00:25:00.939 }, 00:25:00.939 "queue_depth": 128, 00:25:00.939 "io_size": 4096, 00:25:00.939 "runtime": 1.024805, 00:25:00.939 "iops": 3258.1808246446885, 00:25:00.939 "mibps": 12.727268846268315, 00:25:00.939 "io_failed": 0, 00:25:00.939 "io_timeout": 0, 00:25:00.939 "avg_latency_us": 38795.45393630827, 00:25:00.939 "min_latency_us": 6553.6, 00:25:00.939 "max_latency_us": 45244.112592592595 00:25:00.939 } 00:25:00.939 ], 00:25:00.939 "core_count": 1 00:25:00.939 } 00:25:00.939 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:25:00.939 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.939 16:30:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:00.939 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.939 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:25:00.939 "subsystems": [ 00:25:00.939 { 00:25:00.939 "subsystem": "keyring", 
00:25:00.939 "config": [ 00:25:00.939 { 00:25:00.939 "method": "keyring_file_add_key", 00:25:00.939 "params": { 00:25:00.939 "name": "key0", 00:25:00.939 "path": "/tmp/tmp.Um4T2NlSTs" 00:25:00.939 } 00:25:00.939 } 00:25:00.939 ] 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "subsystem": "iobuf", 00:25:00.939 "config": [ 00:25:00.939 { 00:25:00.939 "method": "iobuf_set_options", 00:25:00.939 "params": { 00:25:00.939 "small_pool_count": 8192, 00:25:00.939 "large_pool_count": 1024, 00:25:00.939 "small_bufsize": 8192, 00:25:00.939 "large_bufsize": 135168, 00:25:00.939 "enable_numa": false 00:25:00.939 } 00:25:00.939 } 00:25:00.939 ] 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "subsystem": "sock", 00:25:00.939 "config": [ 00:25:00.939 { 00:25:00.939 "method": "sock_set_default_impl", 00:25:00.939 "params": { 00:25:00.939 "impl_name": "posix" 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "sock_impl_set_options", 00:25:00.939 "params": { 00:25:00.939 "impl_name": "ssl", 00:25:00.939 "recv_buf_size": 4096, 00:25:00.939 "send_buf_size": 4096, 00:25:00.939 "enable_recv_pipe": true, 00:25:00.939 "enable_quickack": false, 00:25:00.939 "enable_placement_id": 0, 00:25:00.939 "enable_zerocopy_send_server": true, 00:25:00.939 "enable_zerocopy_send_client": false, 00:25:00.939 "zerocopy_threshold": 0, 00:25:00.939 "tls_version": 0, 00:25:00.939 "enable_ktls": false 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "sock_impl_set_options", 00:25:00.939 "params": { 00:25:00.939 "impl_name": "posix", 00:25:00.939 "recv_buf_size": 2097152, 00:25:00.939 "send_buf_size": 2097152, 00:25:00.939 "enable_recv_pipe": true, 00:25:00.939 "enable_quickack": false, 00:25:00.939 "enable_placement_id": 0, 00:25:00.939 "enable_zerocopy_send_server": true, 00:25:00.939 "enable_zerocopy_send_client": false, 00:25:00.939 "zerocopy_threshold": 0, 00:25:00.939 "tls_version": 0, 00:25:00.939 "enable_ktls": false 00:25:00.939 } 00:25:00.939 } 00:25:00.939 ] 
00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "subsystem": "vmd", 00:25:00.939 "config": [] 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "subsystem": "accel", 00:25:00.939 "config": [ 00:25:00.939 { 00:25:00.939 "method": "accel_set_options", 00:25:00.939 "params": { 00:25:00.939 "small_cache_size": 128, 00:25:00.939 "large_cache_size": 16, 00:25:00.939 "task_count": 2048, 00:25:00.939 "sequence_count": 2048, 00:25:00.939 "buf_count": 2048 00:25:00.939 } 00:25:00.939 } 00:25:00.939 ] 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "subsystem": "bdev", 00:25:00.939 "config": [ 00:25:00.939 { 00:25:00.939 "method": "bdev_set_options", 00:25:00.939 "params": { 00:25:00.939 "bdev_io_pool_size": 65535, 00:25:00.939 "bdev_io_cache_size": 256, 00:25:00.939 "bdev_auto_examine": true, 00:25:00.939 "iobuf_small_cache_size": 128, 00:25:00.939 "iobuf_large_cache_size": 16 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "bdev_raid_set_options", 00:25:00.939 "params": { 00:25:00.939 "process_window_size_kb": 1024, 00:25:00.939 "process_max_bandwidth_mb_sec": 0 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "bdev_iscsi_set_options", 00:25:00.939 "params": { 00:25:00.939 "timeout_sec": 30 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "bdev_nvme_set_options", 00:25:00.939 "params": { 00:25:00.939 "action_on_timeout": "none", 00:25:00.939 "timeout_us": 0, 00:25:00.939 "timeout_admin_us": 0, 00:25:00.939 "keep_alive_timeout_ms": 10000, 00:25:00.939 "arbitration_burst": 0, 00:25:00.939 "low_priority_weight": 0, 00:25:00.939 "medium_priority_weight": 0, 00:25:00.939 "high_priority_weight": 0, 00:25:00.939 "nvme_adminq_poll_period_us": 10000, 00:25:00.939 "nvme_ioq_poll_period_us": 0, 00:25:00.939 "io_queue_requests": 0, 00:25:00.939 "delay_cmd_submit": true, 00:25:00.939 "transport_retry_count": 4, 00:25:00.939 "bdev_retry_count": 3, 00:25:00.939 "transport_ack_timeout": 0, 00:25:00.939 "ctrlr_loss_timeout_sec": 0, 00:25:00.939 
"reconnect_delay_sec": 0, 00:25:00.939 "fast_io_fail_timeout_sec": 0, 00:25:00.939 "disable_auto_failback": false, 00:25:00.939 "generate_uuids": false, 00:25:00.939 "transport_tos": 0, 00:25:00.939 "nvme_error_stat": false, 00:25:00.939 "rdma_srq_size": 0, 00:25:00.939 "io_path_stat": false, 00:25:00.939 "allow_accel_sequence": false, 00:25:00.939 "rdma_max_cq_size": 0, 00:25:00.939 "rdma_cm_event_timeout_ms": 0, 00:25:00.939 "dhchap_digests": [ 00:25:00.939 "sha256", 00:25:00.939 "sha384", 00:25:00.939 "sha512" 00:25:00.939 ], 00:25:00.939 "dhchap_dhgroups": [ 00:25:00.939 "null", 00:25:00.939 "ffdhe2048", 00:25:00.939 "ffdhe3072", 00:25:00.939 "ffdhe4096", 00:25:00.939 "ffdhe6144", 00:25:00.939 "ffdhe8192" 00:25:00.939 ] 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.939 "method": "bdev_nvme_set_hotplug", 00:25:00.939 "params": { 00:25:00.939 "period_us": 100000, 00:25:00.939 "enable": false 00:25:00.939 } 00:25:00.939 }, 00:25:00.939 { 00:25:00.940 "method": "bdev_malloc_create", 00:25:00.940 "params": { 00:25:00.940 "name": "malloc0", 00:25:00.940 "num_blocks": 8192, 00:25:00.940 "block_size": 4096, 00:25:00.940 "physical_block_size": 4096, 00:25:00.940 "uuid": "9fbd423a-cb12-40df-8210-d79f326641c1", 00:25:00.940 "optimal_io_boundary": 0, 00:25:00.940 "md_size": 0, 00:25:00.940 "dif_type": 0, 00:25:00.940 "dif_is_head_of_md": false, 00:25:00.940 "dif_pi_format": 0 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "bdev_wait_for_examine" 00:25:00.940 } 00:25:00.940 ] 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "subsystem": "nbd", 00:25:00.940 "config": [] 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "subsystem": "scheduler", 00:25:00.940 "config": [ 00:25:00.940 { 00:25:00.940 "method": "framework_set_scheduler", 00:25:00.940 "params": { 00:25:00.940 "name": "static" 00:25:00.940 } 00:25:00.940 } 00:25:00.940 ] 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "subsystem": "nvmf", 00:25:00.940 "config": [ 00:25:00.940 { 00:25:00.940 
"method": "nvmf_set_config", 00:25:00.940 "params": { 00:25:00.940 "discovery_filter": "match_any", 00:25:00.940 "admin_cmd_passthru": { 00:25:00.940 "identify_ctrlr": false 00:25:00.940 }, 00:25:00.940 "dhchap_digests": [ 00:25:00.940 "sha256", 00:25:00.940 "sha384", 00:25:00.940 "sha512" 00:25:00.940 ], 00:25:00.940 "dhchap_dhgroups": [ 00:25:00.940 "null", 00:25:00.940 "ffdhe2048", 00:25:00.940 "ffdhe3072", 00:25:00.940 "ffdhe4096", 00:25:00.940 "ffdhe6144", 00:25:00.940 "ffdhe8192" 00:25:00.940 ] 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_set_max_subsystems", 00:25:00.940 "params": { 00:25:00.940 "max_subsystems": 1024 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_set_crdt", 00:25:00.940 "params": { 00:25:00.940 "crdt1": 0, 00:25:00.940 "crdt2": 0, 00:25:00.940 "crdt3": 0 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_create_transport", 00:25:00.940 "params": { 00:25:00.940 "trtype": "TCP", 00:25:00.940 "max_queue_depth": 128, 00:25:00.940 "max_io_qpairs_per_ctrlr": 127, 00:25:00.940 "in_capsule_data_size": 4096, 00:25:00.940 "max_io_size": 131072, 00:25:00.940 "io_unit_size": 131072, 00:25:00.940 "max_aq_depth": 128, 00:25:00.940 "num_shared_buffers": 511, 00:25:00.940 "buf_cache_size": 4294967295, 00:25:00.940 "dif_insert_or_strip": false, 00:25:00.940 "zcopy": false, 00:25:00.940 "c2h_success": false, 00:25:00.940 "sock_priority": 0, 00:25:00.940 "abort_timeout_sec": 1, 00:25:00.940 "ack_timeout": 0, 00:25:00.940 "data_wr_pool_size": 0 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_create_subsystem", 00:25:00.940 "params": { 00:25:00.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.940 "allow_any_host": false, 00:25:00.940 "serial_number": "00000000000000000000", 00:25:00.940 "model_number": "SPDK bdev Controller", 00:25:00.940 "max_namespaces": 32, 00:25:00.940 "min_cntlid": 1, 00:25:00.940 "max_cntlid": 65519, 00:25:00.940 "ana_reporting": 
false 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_subsystem_add_host", 00:25:00.940 "params": { 00:25:00.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.940 "host": "nqn.2016-06.io.spdk:host1", 00:25:00.940 "psk": "key0" 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_subsystem_add_ns", 00:25:00.940 "params": { 00:25:00.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.940 "namespace": { 00:25:00.940 "nsid": 1, 00:25:00.940 "bdev_name": "malloc0", 00:25:00.940 "nguid": "9FBD423ACB1240DF8210D79F326641C1", 00:25:00.940 "uuid": "9fbd423a-cb12-40df-8210-d79f326641c1", 00:25:00.940 "no_auto_visible": false 00:25:00.940 } 00:25:00.940 } 00:25:00.940 }, 00:25:00.940 { 00:25:00.940 "method": "nvmf_subsystem_add_listener", 00:25:00.940 "params": { 00:25:00.940 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.940 "listen_address": { 00:25:00.940 "trtype": "TCP", 00:25:00.940 "adrfam": "IPv4", 00:25:00.940 "traddr": "10.0.0.2", 00:25:00.940 "trsvcid": "4420" 00:25:00.940 }, 00:25:00.940 "secure_channel": false, 00:25:00.940 "sock_impl": "ssl" 00:25:00.940 } 00:25:00.940 } 00:25:00.940 ] 00:25:00.940 } 00:25:00.940 ] 00:25:00.940 }' 00:25:00.940 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:01.199 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:25:01.199 "subsystems": [ 00:25:01.199 { 00:25:01.199 "subsystem": "keyring", 00:25:01.199 "config": [ 00:25:01.199 { 00:25:01.199 "method": "keyring_file_add_key", 00:25:01.199 "params": { 00:25:01.199 "name": "key0", 00:25:01.199 "path": "/tmp/tmp.Um4T2NlSTs" 00:25:01.199 } 00:25:01.199 } 00:25:01.199 ] 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "subsystem": "iobuf", 00:25:01.199 "config": [ 00:25:01.199 { 00:25:01.199 "method": "iobuf_set_options", 00:25:01.199 "params": { 00:25:01.199 "small_pool_count": 
8192, 00:25:01.199 "large_pool_count": 1024, 00:25:01.199 "small_bufsize": 8192, 00:25:01.199 "large_bufsize": 135168, 00:25:01.199 "enable_numa": false 00:25:01.199 } 00:25:01.199 } 00:25:01.199 ] 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "subsystem": "sock", 00:25:01.199 "config": [ 00:25:01.199 { 00:25:01.199 "method": "sock_set_default_impl", 00:25:01.199 "params": { 00:25:01.199 "impl_name": "posix" 00:25:01.199 } 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "method": "sock_impl_set_options", 00:25:01.199 "params": { 00:25:01.199 "impl_name": "ssl", 00:25:01.199 "recv_buf_size": 4096, 00:25:01.199 "send_buf_size": 4096, 00:25:01.199 "enable_recv_pipe": true, 00:25:01.199 "enable_quickack": false, 00:25:01.199 "enable_placement_id": 0, 00:25:01.199 "enable_zerocopy_send_server": true, 00:25:01.199 "enable_zerocopy_send_client": false, 00:25:01.199 "zerocopy_threshold": 0, 00:25:01.199 "tls_version": 0, 00:25:01.199 "enable_ktls": false 00:25:01.199 } 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "method": "sock_impl_set_options", 00:25:01.199 "params": { 00:25:01.199 "impl_name": "posix", 00:25:01.199 "recv_buf_size": 2097152, 00:25:01.199 "send_buf_size": 2097152, 00:25:01.199 "enable_recv_pipe": true, 00:25:01.199 "enable_quickack": false, 00:25:01.199 "enable_placement_id": 0, 00:25:01.199 "enable_zerocopy_send_server": true, 00:25:01.199 "enable_zerocopy_send_client": false, 00:25:01.199 "zerocopy_threshold": 0, 00:25:01.199 "tls_version": 0, 00:25:01.199 "enable_ktls": false 00:25:01.199 } 00:25:01.199 } 00:25:01.199 ] 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "subsystem": "vmd", 00:25:01.199 "config": [] 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "subsystem": "accel", 00:25:01.199 "config": [ 00:25:01.199 { 00:25:01.199 "method": "accel_set_options", 00:25:01.199 "params": { 00:25:01.199 "small_cache_size": 128, 00:25:01.199 "large_cache_size": 16, 00:25:01.199 "task_count": 2048, 00:25:01.199 "sequence_count": 2048, 00:25:01.199 "buf_count": 2048 
00:25:01.199 } 00:25:01.199 } 00:25:01.199 ] 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "subsystem": "bdev", 00:25:01.199 "config": [ 00:25:01.199 { 00:25:01.199 "method": "bdev_set_options", 00:25:01.199 "params": { 00:25:01.199 "bdev_io_pool_size": 65535, 00:25:01.199 "bdev_io_cache_size": 256, 00:25:01.199 "bdev_auto_examine": true, 00:25:01.199 "iobuf_small_cache_size": 128, 00:25:01.199 "iobuf_large_cache_size": 16 00:25:01.199 } 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "method": "bdev_raid_set_options", 00:25:01.199 "params": { 00:25:01.199 "process_window_size_kb": 1024, 00:25:01.199 "process_max_bandwidth_mb_sec": 0 00:25:01.199 } 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "method": "bdev_iscsi_set_options", 00:25:01.199 "params": { 00:25:01.199 "timeout_sec": 30 00:25:01.199 } 00:25:01.199 }, 00:25:01.199 { 00:25:01.199 "method": "bdev_nvme_set_options", 00:25:01.199 "params": { 00:25:01.199 "action_on_timeout": "none", 00:25:01.199 "timeout_us": 0, 00:25:01.199 "timeout_admin_us": 0, 00:25:01.199 "keep_alive_timeout_ms": 10000, 00:25:01.199 "arbitration_burst": 0, 00:25:01.199 "low_priority_weight": 0, 00:25:01.199 "medium_priority_weight": 0, 00:25:01.199 "high_priority_weight": 0, 00:25:01.199 "nvme_adminq_poll_period_us": 10000, 00:25:01.199 "nvme_ioq_poll_period_us": 0, 00:25:01.199 "io_queue_requests": 512, 00:25:01.199 "delay_cmd_submit": true, 00:25:01.199 "transport_retry_count": 4, 00:25:01.199 "bdev_retry_count": 3, 00:25:01.199 "transport_ack_timeout": 0, 00:25:01.200 "ctrlr_loss_timeout_sec": 0, 00:25:01.200 "reconnect_delay_sec": 0, 00:25:01.200 "fast_io_fail_timeout_sec": 0, 00:25:01.200 "disable_auto_failback": false, 00:25:01.200 "generate_uuids": false, 00:25:01.200 "transport_tos": 0, 00:25:01.200 "nvme_error_stat": false, 00:25:01.200 "rdma_srq_size": 0, 00:25:01.200 "io_path_stat": false, 00:25:01.200 "allow_accel_sequence": false, 00:25:01.200 "rdma_max_cq_size": 0, 00:25:01.200 "rdma_cm_event_timeout_ms": 0, 00:25:01.200 
"dhchap_digests": [ 00:25:01.200 "sha256", 00:25:01.200 "sha384", 00:25:01.200 "sha512" 00:25:01.200 ], 00:25:01.200 "dhchap_dhgroups": [ 00:25:01.200 "null", 00:25:01.200 "ffdhe2048", 00:25:01.200 "ffdhe3072", 00:25:01.200 "ffdhe4096", 00:25:01.200 "ffdhe6144", 00:25:01.200 "ffdhe8192" 00:25:01.200 ] 00:25:01.200 } 00:25:01.200 }, 00:25:01.200 { 00:25:01.200 "method": "bdev_nvme_attach_controller", 00:25:01.200 "params": { 00:25:01.200 "name": "nvme0", 00:25:01.200 "trtype": "TCP", 00:25:01.200 "adrfam": "IPv4", 00:25:01.200 "traddr": "10.0.0.2", 00:25:01.200 "trsvcid": "4420", 00:25:01.200 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.200 "prchk_reftag": false, 00:25:01.200 "prchk_guard": false, 00:25:01.200 "ctrlr_loss_timeout_sec": 0, 00:25:01.200 "reconnect_delay_sec": 0, 00:25:01.200 "fast_io_fail_timeout_sec": 0, 00:25:01.200 "psk": "key0", 00:25:01.200 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:01.200 "hdgst": false, 00:25:01.200 "ddgst": false, 00:25:01.200 "multipath": "multipath" 00:25:01.200 } 00:25:01.200 }, 00:25:01.200 { 00:25:01.200 "method": "bdev_nvme_set_hotplug", 00:25:01.200 "params": { 00:25:01.200 "period_us": 100000, 00:25:01.200 "enable": false 00:25:01.200 } 00:25:01.200 }, 00:25:01.200 { 00:25:01.200 "method": "bdev_enable_histogram", 00:25:01.200 "params": { 00:25:01.200 "name": "nvme0n1", 00:25:01.200 "enable": true 00:25:01.200 } 00:25:01.200 }, 00:25:01.200 { 00:25:01.200 "method": "bdev_wait_for_examine" 00:25:01.200 } 00:25:01.200 ] 00:25:01.200 }, 00:25:01.200 { 00:25:01.200 "subsystem": "nbd", 00:25:01.200 "config": [] 00:25:01.200 } 00:25:01.200 ] 00:25:01.200 }' 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 280361 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280361 ']' 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280361 00:25:01.200 16:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280361 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280361' 00:25:01.200 killing process with pid 280361 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280361 00:25:01.200 Received shutdown signal, test time was about 1.000000 seconds 00:25:01.200 00:25:01.200 Latency(us) 00:25:01.200 [2024-11-19T15:30:51.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:01.200 [2024-11-19T15:30:51.539Z] =================================================================================================================== 00:25:01.200 [2024-11-19T15:30:51.539Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:01.200 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280361 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 280329 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280329 ']' 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280329 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:01.459 16:30:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280329 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280329' 00:25:01.459 killing process with pid 280329 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280329 00:25:01.459 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280329 00:25:01.718 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:25:01.718 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:01.718 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:25:01.718 "subsystems": [ 00:25:01.718 { 00:25:01.718 "subsystem": "keyring", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "keyring_file_add_key", 00:25:01.718 "params": { 00:25:01.718 "name": "key0", 00:25:01.718 "path": "/tmp/tmp.Um4T2NlSTs" 00:25:01.718 } 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "iobuf", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "iobuf_set_options", 00:25:01.718 "params": { 00:25:01.718 "small_pool_count": 8192, 00:25:01.718 "large_pool_count": 1024, 00:25:01.718 "small_bufsize": 8192, 00:25:01.718 "large_bufsize": 135168, 00:25:01.718 "enable_numa": false 00:25:01.718 } 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "sock", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "sock_set_default_impl", 00:25:01.718 "params": { 00:25:01.718 "impl_name": "posix" 00:25:01.718 
} 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "sock_impl_set_options", 00:25:01.718 "params": { 00:25:01.718 "impl_name": "ssl", 00:25:01.718 "recv_buf_size": 4096, 00:25:01.718 "send_buf_size": 4096, 00:25:01.718 "enable_recv_pipe": true, 00:25:01.718 "enable_quickack": false, 00:25:01.718 "enable_placement_id": 0, 00:25:01.718 "enable_zerocopy_send_server": true, 00:25:01.718 "enable_zerocopy_send_client": false, 00:25:01.718 "zerocopy_threshold": 0, 00:25:01.718 "tls_version": 0, 00:25:01.718 "enable_ktls": false 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "sock_impl_set_options", 00:25:01.718 "params": { 00:25:01.718 "impl_name": "posix", 00:25:01.718 "recv_buf_size": 2097152, 00:25:01.718 "send_buf_size": 2097152, 00:25:01.718 "enable_recv_pipe": true, 00:25:01.718 "enable_quickack": false, 00:25:01.718 "enable_placement_id": 0, 00:25:01.718 "enable_zerocopy_send_server": true, 00:25:01.718 "enable_zerocopy_send_client": false, 00:25:01.718 "zerocopy_threshold": 0, 00:25:01.718 "tls_version": 0, 00:25:01.718 "enable_ktls": false 00:25:01.718 } 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "vmd", 00:25:01.718 "config": [] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "accel", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "accel_set_options", 00:25:01.718 "params": { 00:25:01.718 "small_cache_size": 128, 00:25:01.718 "large_cache_size": 16, 00:25:01.718 "task_count": 2048, 00:25:01.718 "sequence_count": 2048, 00:25:01.718 "buf_count": 2048 00:25:01.718 } 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "bdev", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "bdev_set_options", 00:25:01.718 "params": { 00:25:01.718 "bdev_io_pool_size": 65535, 00:25:01.718 "bdev_io_cache_size": 256, 00:25:01.718 "bdev_auto_examine": true, 00:25:01.718 "iobuf_small_cache_size": 128, 00:25:01.718 "iobuf_large_cache_size": 16 
00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_raid_set_options", 00:25:01.718 "params": { 00:25:01.718 "process_window_size_kb": 1024, 00:25:01.718 "process_max_bandwidth_mb_sec": 0 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_iscsi_set_options", 00:25:01.718 "params": { 00:25:01.718 "timeout_sec": 30 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_nvme_set_options", 00:25:01.718 "params": { 00:25:01.718 "action_on_timeout": "none", 00:25:01.718 "timeout_us": 0, 00:25:01.718 "timeout_admin_us": 0, 00:25:01.718 "keep_alive_timeout_ms": 10000, 00:25:01.718 "arbitration_burst": 0, 00:25:01.718 "low_priority_weight": 0, 00:25:01.718 "medium_priority_weight": 0, 00:25:01.718 "high_priority_weight": 0, 00:25:01.718 "nvme_adminq_poll_period_us": 10000, 00:25:01.718 "nvme_ioq_poll_period_us": 0, 00:25:01.718 "io_queue_requests": 0, 00:25:01.718 "delay_cmd_submit": true, 00:25:01.718 "transport_retry_count": 4, 00:25:01.718 "bdev_retry_count": 3, 00:25:01.718 "transport_ack_timeout": 0, 00:25:01.718 "ctrlr_loss_timeout_sec": 0, 00:25:01.718 "reconnect_delay_sec": 0, 00:25:01.718 "fast_io_fail_timeout_sec": 0, 00:25:01.718 "disable_auto_failback": false, 00:25:01.718 "generate_uuids": false, 00:25:01.718 "transport_tos": 0, 00:25:01.718 "nvme_error_stat": false, 00:25:01.718 "rdma_srq_size": 0, 00:25:01.718 "io_path_stat": false, 00:25:01.718 "allow_accel_sequence": false, 00:25:01.718 "rdma_max_cq_size": 0, 00:25:01.718 "rdma_cm_event_timeout_ms": 0, 00:25:01.718 "dhchap_digests": [ 00:25:01.718 "sha256", 00:25:01.718 "sha384", 00:25:01.718 "sha512" 00:25:01.718 ], 00:25:01.718 "dhchap_dhgroups": [ 00:25:01.718 "null", 00:25:01.718 "ffdhe2048", 00:25:01.718 "ffdhe3072", 00:25:01.718 "ffdhe4096", 00:25:01.718 "ffdhe6144", 00:25:01.718 "ffdhe8192" 00:25:01.718 ] 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_nvme_set_hotplug", 00:25:01.718 "params": { 00:25:01.718 
"period_us": 100000, 00:25:01.718 "enable": false 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_malloc_create", 00:25:01.718 "params": { 00:25:01.718 "name": "malloc0", 00:25:01.718 "num_blocks": 8192, 00:25:01.718 "block_size": 4096, 00:25:01.718 "physical_block_size": 4096, 00:25:01.718 "uuid": "9fbd423a-cb12-40df-8210-d79f326641c1", 00:25:01.718 "optimal_io_boundary": 0, 00:25:01.718 "md_size": 0, 00:25:01.718 "dif_type": 0, 00:25:01.718 "dif_is_head_of_md": false, 00:25:01.718 "dif_pi_format": 0 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "bdev_wait_for_examine" 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "nbd", 00:25:01.718 "config": [] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "scheduler", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "framework_set_scheduler", 00:25:01.718 "params": { 00:25:01.718 "name": "static" 00:25:01.718 } 00:25:01.718 } 00:25:01.718 ] 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "subsystem": "nvmf", 00:25:01.718 "config": [ 00:25:01.718 { 00:25:01.718 "method": "nvmf_set_config", 00:25:01.718 "params": { 00:25:01.718 "discovery_filter": "match_any", 00:25:01.718 "admin_cmd_passthru": { 00:25:01.718 "identify_ctrlr": false 00:25:01.718 }, 00:25:01.718 "dhchap_digests": [ 00:25:01.718 "sha256", 00:25:01.718 "sha384", 00:25:01.718 "sha512" 00:25:01.718 ], 00:25:01.718 "dhchap_dhgroups": [ 00:25:01.718 "null", 00:25:01.718 "ffdhe2048", 00:25:01.718 "ffdhe3072", 00:25:01.718 "ffdhe4096", 00:25:01.718 "ffdhe6144", 00:25:01.718 "ffdhe8192" 00:25:01.718 ] 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "nvmf_set_max_subsystems", 00:25:01.718 "params": { 00:25:01.718 "max_subsystems": 1024 00:25:01.718 } 00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "nvmf_set_crdt", 00:25:01.718 "params": { 00:25:01.718 "crdt1": 0, 00:25:01.718 "crdt2": 0, 00:25:01.718 "crdt3": 0 00:25:01.718 } 
00:25:01.718 }, 00:25:01.718 { 00:25:01.718 "method": "nvmf_create_transport", 00:25:01.718 "params": { 00:25:01.718 "trtype": "TCP", 00:25:01.718 "max_queue_depth": 128, 00:25:01.718 "max_io_qpairs_per_ctrlr": 127, 00:25:01.718 "in_capsule_data_size": 4096, 00:25:01.718 "max_io_size": 131072, 00:25:01.718 "io_unit_size": 131072, 00:25:01.718 "max_aq_depth": 128, 00:25:01.718 "num_shared_buffers": 511, 00:25:01.718 "buf_cache_size": 4294967295, 00:25:01.718 "dif_insert_or_strip": false, 00:25:01.718 "zcopy": false, 00:25:01.718 "c2h_success": false, 00:25:01.718 "sock_priority": 0, 00:25:01.718 "abort_timeout_sec": 1, 00:25:01.718 "ack_timeout": 0, 00:25:01.718 "data_wr_pool_size": 0 00:25:01.718 } 00:25:01.718 }, 00:25:01.719 { 00:25:01.719 "method": "nvmf_create_subsystem", 00:25:01.719 "params": { 00:25:01.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.719 "allow_any_host": false, 00:25:01.719 "serial_number": "00000000000000000000", 00:25:01.719 "model_number": "SPDK bdev Controller", 00:25:01.719 "max_namespaces": 32, 00:25:01.719 "min_cntlid": 1, 00:25:01.719 "max_cntlid": 65519, 00:25:01.719 "ana_reporting": false 00:25:01.719 } 00:25:01.719 }, 00:25:01.719 { 00:25:01.719 "method": "nvmf_subsystem_add_host", 00:25:01.719 "params": { 00:25:01.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.719 "host": "nqn.2016-06.io.spdk:host1", 00:25:01.719 "psk": "key0" 00:25:01.719 } 00:25:01.719 }, 00:25:01.719 { 00:25:01.719 "method": "nvmf_subsystem_add_ns", 00:25:01.719 "params": { 00:25:01.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:01.719 "namespace": { 00:25:01.719 "nsid": 1, 00:25:01.719 "bdev_name": "malloc0", 00:25:01.719 "nguid": "9FBD423ACB1240DF8210D79F326641C1", 00:25:01.719 "uuid": "9fbd423a-cb12-40df-8210-d79f326641c1", 00:25:01.719 "no_auto_visible": false 00:25:01.719 } 00:25:01.719 } 00:25:01.719 }, 00:25:01.719 { 00:25:01.719 "method": "nvmf_subsystem_add_listener", 00:25:01.719 "params": { 00:25:01.719 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:01.719 "listen_address": { 00:25:01.719 "trtype": "TCP", 00:25:01.719 "adrfam": "IPv4", 00:25:01.719 "traddr": "10.0.0.2", 00:25:01.719 "trsvcid": "4420" 00:25:01.719 }, 00:25:01.719 "secure_channel": false, 00:25:01.719 "sock_impl": "ssl" 00:25:01.719 } 00:25:01.719 } 00:25:01.719 ] 00:25:01.719 } 00:25:01.719 ] 00:25:01.719 }' 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=280766 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 280766 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280766 ']' 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:01.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:01.719 16:30:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:01.719 [2024-11-19 16:30:51.877660] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:25:01.719 [2024-11-19 16:30:51.877764] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:01.719 [2024-11-19 16:30:51.947264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.719 [2024-11-19 16:30:51.986653] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.719 [2024-11-19 16:30:51.986715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.719 [2024-11-19 16:30:51.986738] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.719 [2024-11-19 16:30:51.986748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.719 [2024-11-19 16:30:51.986758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:01.719 [2024-11-19 16:30:51.987360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.977 [2024-11-19 16:30:52.225880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:01.977 [2024-11-19 16:30:52.257915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:01.977 [2024-11-19 16:30:52.258210] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:02.911 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:02.911 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=280917 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 280917 /var/tmp/bdevperf.sock 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 280917 ']' 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:02.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:02.912 16:30:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:25:02.912 "subsystems": [ 00:25:02.912 { 00:25:02.912 "subsystem": "keyring", 00:25:02.912 "config": [ 00:25:02.912 { 00:25:02.912 "method": "keyring_file_add_key", 00:25:02.912 "params": { 00:25:02.912 "name": "key0", 00:25:02.912 "path": "/tmp/tmp.Um4T2NlSTs" 00:25:02.912 } 00:25:02.912 } 00:25:02.912 ] 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "subsystem": "iobuf", 00:25:02.912 "config": [ 00:25:02.912 { 00:25:02.912 "method": "iobuf_set_options", 00:25:02.912 "params": { 00:25:02.912 "small_pool_count": 8192, 00:25:02.912 "large_pool_count": 1024, 00:25:02.912 "small_bufsize": 8192, 00:25:02.912 "large_bufsize": 135168, 00:25:02.912 "enable_numa": false 00:25:02.912 } 00:25:02.912 } 00:25:02.912 ] 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "subsystem": "sock", 00:25:02.912 "config": [ 00:25:02.912 { 00:25:02.912 "method": "sock_set_default_impl", 00:25:02.912 "params": { 00:25:02.912 "impl_name": "posix" 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "sock_impl_set_options", 00:25:02.912 "params": { 00:25:02.912 "impl_name": "ssl", 00:25:02.912 "recv_buf_size": 4096, 00:25:02.912 "send_buf_size": 4096, 00:25:02.912 "enable_recv_pipe": true, 00:25:02.912 "enable_quickack": false, 00:25:02.912 "enable_placement_id": 0, 00:25:02.912 "enable_zerocopy_send_server": true, 00:25:02.912 
"enable_zerocopy_send_client": false, 00:25:02.912 "zerocopy_threshold": 0, 00:25:02.912 "tls_version": 0, 00:25:02.912 "enable_ktls": false 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "sock_impl_set_options", 00:25:02.912 "params": { 00:25:02.912 "impl_name": "posix", 00:25:02.912 "recv_buf_size": 2097152, 00:25:02.912 "send_buf_size": 2097152, 00:25:02.912 "enable_recv_pipe": true, 00:25:02.912 "enable_quickack": false, 00:25:02.912 "enable_placement_id": 0, 00:25:02.912 "enable_zerocopy_send_server": true, 00:25:02.912 "enable_zerocopy_send_client": false, 00:25:02.912 "zerocopy_threshold": 0, 00:25:02.912 "tls_version": 0, 00:25:02.912 "enable_ktls": false 00:25:02.912 } 00:25:02.912 } 00:25:02.912 ] 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "subsystem": "vmd", 00:25:02.912 "config": [] 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "subsystem": "accel", 00:25:02.912 "config": [ 00:25:02.912 { 00:25:02.912 "method": "accel_set_options", 00:25:02.912 "params": { 00:25:02.912 "small_cache_size": 128, 00:25:02.912 "large_cache_size": 16, 00:25:02.912 "task_count": 2048, 00:25:02.912 "sequence_count": 2048, 00:25:02.912 "buf_count": 2048 00:25:02.912 } 00:25:02.912 } 00:25:02.912 ] 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "subsystem": "bdev", 00:25:02.912 "config": [ 00:25:02.912 { 00:25:02.912 "method": "bdev_set_options", 00:25:02.912 "params": { 00:25:02.912 "bdev_io_pool_size": 65535, 00:25:02.912 "bdev_io_cache_size": 256, 00:25:02.912 "bdev_auto_examine": true, 00:25:02.912 "iobuf_small_cache_size": 128, 00:25:02.912 "iobuf_large_cache_size": 16 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "bdev_raid_set_options", 00:25:02.912 "params": { 00:25:02.912 "process_window_size_kb": 1024, 00:25:02.912 "process_max_bandwidth_mb_sec": 0 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "bdev_iscsi_set_options", 00:25:02.912 "params": { 00:25:02.912 "timeout_sec": 30 00:25:02.912 } 00:25:02.912 }, 
00:25:02.912 { 00:25:02.912 "method": "bdev_nvme_set_options", 00:25:02.912 "params": { 00:25:02.912 "action_on_timeout": "none", 00:25:02.912 "timeout_us": 0, 00:25:02.912 "timeout_admin_us": 0, 00:25:02.912 "keep_alive_timeout_ms": 10000, 00:25:02.912 "arbitration_burst": 0, 00:25:02.912 "low_priority_weight": 0, 00:25:02.912 "medium_priority_weight": 0, 00:25:02.912 "high_priority_weight": 0, 00:25:02.912 "nvme_adminq_poll_period_us": 10000, 00:25:02.912 "nvme_ioq_poll_period_us": 0, 00:25:02.912 "io_queue_requests": 512, 00:25:02.912 "delay_cmd_submit": true, 00:25:02.912 "transport_retry_count": 4, 00:25:02.912 "bdev_retry_count": 3, 00:25:02.912 "transport_ack_timeout": 0, 00:25:02.912 "ctrlr_loss_timeout_sec": 0, 00:25:02.912 "reconnect_delay_sec": 0, 00:25:02.912 "fast_io_fail_timeout_sec": 0, 00:25:02.912 "disable_auto_failback": false, 00:25:02.912 "generate_uuids": false, 00:25:02.912 "transport_tos": 0, 00:25:02.912 "nvme_error_stat": false, 00:25:02.912 "rdma_srq_size": 0, 00:25:02.912 "io_path_stat": false, 00:25:02.912 "allow_accel_sequence": false, 00:25:02.912 "rdma_max_cq_size": 0, 00:25:02.912 "rdma_cm_event_timeout_ms": 0, 00:25:02.912 "dhchap_digests": [ 00:25:02.912 "sha256", 00:25:02.912 "sha384", 00:25:02.912 "sha512" 00:25:02.912 ], 00:25:02.912 "dhchap_dhgroups": [ 00:25:02.912 "null", 00:25:02.912 "ffdhe2048", 00:25:02.912 "ffdhe3072", 00:25:02.912 "ffdhe4096", 00:25:02.912 "ffdhe6144", 00:25:02.912 "ffdhe8192" 00:25:02.912 ] 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "bdev_nvme_attach_controller", 00:25:02.912 "params": { 00:25:02.912 "name": "nvme0", 00:25:02.912 "trtype": "TCP", 00:25:02.912 "adrfam": "IPv4", 00:25:02.912 "traddr": "10.0.0.2", 00:25:02.912 "trsvcid": "4420", 00:25:02.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:02.912 "prchk_reftag": false, 00:25:02.912 "prchk_guard": false, 00:25:02.912 "ctrlr_loss_timeout_sec": 0, 00:25:02.912 "reconnect_delay_sec": 0, 00:25:02.912 
"fast_io_fail_timeout_sec": 0, 00:25:02.912 "psk": "key0", 00:25:02.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:02.912 "hdgst": false, 00:25:02.912 "ddgst": false, 00:25:02.912 "multipath": "multipath" 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "bdev_nvme_set_hotplug", 00:25:02.912 "params": { 00:25:02.912 "period_us": 100000, 00:25:02.912 "enable": false 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.912 "method": "bdev_enable_histogram", 00:25:02.912 "params": { 00:25:02.912 "name": "nvme0n1", 00:25:02.912 "enable": true 00:25:02.912 } 00:25:02.912 }, 00:25:02.912 { 00:25:02.913 "method": "bdev_wait_for_examine" 00:25:02.913 } 00:25:02.913 ] 00:25:02.913 }, 00:25:02.913 { 00:25:02.913 "subsystem": "nbd", 00:25:02.913 "config": [] 00:25:02.913 } 00:25:02.913 ] 00:25:02.913 }' 00:25:02.913 [2024-11-19 16:30:52.983359] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:25:02.913 [2024-11-19 16:30:52.983461] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid280917 ] 00:25:02.913 [2024-11-19 16:30:53.057259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.913 [2024-11-19 16:30:53.106369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.171 [2024-11-19 16:30:53.285213] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.171 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:03.171 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:25:03.171 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:25:03.171 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:25:03.428 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.428 16:30:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:03.686 Running I/O for 1 seconds... 00:25:04.620 3158.00 IOPS, 12.34 MiB/s 00:25:04.620 Latency(us) 00:25:04.620 [2024-11-19T15:30:54.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.620 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:04.620 Verification LBA range: start 0x0 length 0x2000 00:25:04.620 nvme0n1 : 1.02 3211.07 12.54 0.00 0.00 39463.99 6990.51 43496.49 00:25:04.620 [2024-11-19T15:30:54.959Z] =================================================================================================================== 00:25:04.620 [2024-11-19T15:30:54.959Z] Total : 3211.07 12.54 0.00 0.00 39463.99 6990.51 43496.49 00:25:04.620 { 00:25:04.620 "results": [ 00:25:04.620 { 00:25:04.620 "job": "nvme0n1", 00:25:04.620 "core_mask": "0x2", 00:25:04.620 "workload": "verify", 00:25:04.620 "status": "finished", 00:25:04.620 "verify_range": { 00:25:04.620 "start": 0, 00:25:04.620 "length": 8192 00:25:04.620 }, 00:25:04.620 "queue_depth": 128, 00:25:04.620 "io_size": 4096, 00:25:04.620 "runtime": 1.023646, 00:25:04.620 "iops": 3211.071014784408, 00:25:04.620 "mibps": 12.543246151501593, 00:25:04.620 "io_failed": 0, 00:25:04.620 "io_timeout": 0, 00:25:04.620 "avg_latency_us": 39463.993794634305, 00:25:04.620 "min_latency_us": 6990.506666666667, 00:25:04.620 "max_latency_us": 43496.485925925925 00:25:04.620 } 00:25:04.620 ], 00:25:04.620 "core_count": 1 00:25:04.620 } 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:04.620 nvmf_trace.0 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 280917 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280917 ']' 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280917 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280917 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280917' 00:25:04.620 killing process with pid 280917 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280917 00:25:04.620 Received shutdown signal, test time was about 1.000000 seconds 00:25:04.620 00:25:04.620 Latency(us) 00:25:04.620 [2024-11-19T15:30:54.959Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.620 [2024-11-19T15:30:54.959Z] =================================================================================================================== 00:25:04.620 [2024-11-19T15:30:54.959Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:04.620 16:30:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280917 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.879 rmmod nvme_tcp 00:25:04.879 rmmod nvme_fabrics 00:25:04.879 rmmod nvme_keyring 00:25:04.879 16:30:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 280766 ']' 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 280766 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 280766 ']' 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 280766 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.879 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 280766 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 280766' 00:25:05.138 killing process with pid 280766 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 280766 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 280766 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.138 16:30:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.06KqLsDC7c /tmp/tmp.G1VI6XdJsa /tmp/tmp.Um4T2NlSTs 00:25:07.682 00:25:07.682 real 1m21.988s 00:25:07.682 user 2m17.521s 00:25:07.682 sys 0m24.615s 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:25:07.682 ************************************ 00:25:07.682 END TEST nvmf_tls 00:25:07.682 ************************************ 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.682 ************************************ 00:25:07.682 START TEST nvmf_fips 00:25:07.682 ************************************ 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:07.682 * Looking for test storage... 00:25:07.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.682 16:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.682 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.683 16:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.683 --rc genhtml_branch_coverage=1 00:25:07.683 --rc genhtml_function_coverage=1 00:25:07.683 --rc genhtml_legend=1 00:25:07.683 --rc geninfo_all_blocks=1 00:25:07.683 --rc geninfo_unexecuted_blocks=1 00:25:07.683 00:25:07.683 ' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.683 --rc genhtml_branch_coverage=1 00:25:07.683 --rc genhtml_function_coverage=1 00:25:07.683 --rc genhtml_legend=1 00:25:07.683 --rc geninfo_all_blocks=1 00:25:07.683 --rc geninfo_unexecuted_blocks=1 00:25:07.683 00:25:07.683 ' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.683 --rc genhtml_branch_coverage=1 00:25:07.683 --rc genhtml_function_coverage=1 00:25:07.683 --rc genhtml_legend=1 00:25:07.683 --rc geninfo_all_blocks=1 00:25:07.683 --rc geninfo_unexecuted_blocks=1 00:25:07.683 00:25:07.683 ' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.683 --rc genhtml_branch_coverage=1 00:25:07.683 --rc genhtml_function_coverage=1 00:25:07.683 --rc genhtml_legend=1 00:25:07.683 --rc geninfo_all_blocks=1 00:25:07.683 --rc geninfo_unexecuted_blocks=1 00:25:07.683 00:25:07.683 ' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.683 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:25:07.684 Error setting digest 00:25:07.684 40A2B6952F7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:25:07.684 40A2B6952F7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.684 16:30:57 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.684 16:30:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:25:10.215 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:10.216 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:10.216 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:10.216 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:10.216 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:10.216 16:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:10.216 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:10.216 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:25:10.216 00:25:10.216 --- 10.0.0.2 ping statistics --- 00:25:10.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.216 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:10.216 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:10.216 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:25:10.216 00:25:10.216 --- 10.0.0.1 ping statistics --- 00:25:10.216 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:10.216 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:10.216 16:31:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=283152 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 283152 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283152 ']' 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:10.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.216 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.216 [2024-11-19 16:31:00.295001] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:25:10.216 [2024-11-19 16:31:00.295107] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:10.216 [2024-11-19 16:31:00.365182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.216 [2024-11-19 16:31:00.409014] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:10.216 [2024-11-19 16:31:00.409077] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:10.216 [2024-11-19 16:31:00.409117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:10.216 [2024-11-19 16:31:00.409128] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:10.216 [2024-11-19 16:31:00.409138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:10.216 [2024-11-19 16:31:00.409791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.h55 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.h55 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.h55 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.h55 00:25:10.217 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:10.475 [2024-11-19 16:31:00.802925] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:10.733 [2024-11-19 16:31:00.818915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:10.733 [2024-11-19 16:31:00.819190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:10.733 malloc0 00:25:10.733 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:10.733 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=283300 00:25:10.733 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:10.733 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 283300 /var/tmp/bdevperf.sock 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 283300 ']' 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:10.734 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:10.734 16:31:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:10.734 [2024-11-19 16:31:00.944365] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:25:10.734 [2024-11-19 16:31:00.944453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid283300 ] 00:25:10.734 [2024-11-19 16:31:01.009473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.734 [2024-11-19 16:31:01.054696] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.991 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.991 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:25:10.991 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.h55 00:25:11.248 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:11.506 [2024-11-19 16:31:01.658740] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:11.506 TLSTESTn1 00:25:11.506 16:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:11.763 Running I/O for 10 seconds... 
00:25:13.638 3242.00 IOPS, 12.66 MiB/s [2024-11-19T15:31:04.909Z] 3287.50 IOPS, 12.84 MiB/s [2024-11-19T15:31:06.282Z] 3362.00 IOPS, 13.13 MiB/s [2024-11-19T15:31:07.215Z] 3394.25 IOPS, 13.26 MiB/s [2024-11-19T15:31:08.148Z] 3408.60 IOPS, 13.31 MiB/s [2024-11-19T15:31:09.083Z] 3426.33 IOPS, 13.38 MiB/s [2024-11-19T15:31:10.016Z] 3431.43 IOPS, 13.40 MiB/s [2024-11-19T15:31:11.009Z] 3432.12 IOPS, 13.41 MiB/s [2024-11-19T15:31:12.029Z] 3422.33 IOPS, 13.37 MiB/s [2024-11-19T15:31:12.029Z] 3430.80 IOPS, 13.40 MiB/s 00:25:21.690 Latency(us) 00:25:21.690 [2024-11-19T15:31:12.029Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.690 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:21.690 Verification LBA range: start 0x0 length 0x2000 00:25:21.690 TLSTESTn1 : 10.03 3433.89 13.41 0.00 0.00 37209.83 6796.33 51652.08 00:25:21.690 [2024-11-19T15:31:12.029Z] =================================================================================================================== 00:25:21.690 [2024-11-19T15:31:12.029Z] Total : 3433.89 13.41 0.00 0.00 37209.83 6796.33 51652.08 00:25:21.690 { 00:25:21.690 "results": [ 00:25:21.690 { 00:25:21.690 "job": "TLSTESTn1", 00:25:21.690 "core_mask": "0x4", 00:25:21.690 "workload": "verify", 00:25:21.690 "status": "finished", 00:25:21.690 "verify_range": { 00:25:21.690 "start": 0, 00:25:21.690 "length": 8192 00:25:21.690 }, 00:25:21.690 "queue_depth": 128, 00:25:21.690 "io_size": 4096, 00:25:21.690 "runtime": 10.027978, 00:25:21.690 "iops": 3433.892655129479, 00:25:21.690 "mibps": 13.413643184099527, 00:25:21.690 "io_failed": 0, 00:25:21.690 "io_timeout": 0, 00:25:21.690 "avg_latency_us": 37209.83087345455, 00:25:21.690 "min_latency_us": 6796.325925925926, 00:25:21.690 "max_latency_us": 51652.07703703704 00:25:21.690 } 00:25:21.690 ], 00:25:21.690 "core_count": 1 00:25:21.690 } 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:25:21.690 
16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:25:21.690 16:31:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:21.690 nvmf_trace.0 00:25:21.690 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:25:21.690 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 283300 00:25:21.690 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283300 ']' 00:25:21.690 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283300 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283300 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283300' 00:25:21.984 killing process with pid 283300 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283300 00:25:21.984 Received shutdown signal, test time was about 10.000000 seconds 00:25:21.984 00:25:21.984 Latency(us) 00:25:21.984 [2024-11-19T15:31:12.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:21.984 [2024-11-19T15:31:12.323Z] =================================================================================================================== 00:25:21.984 [2024-11-19T15:31:12.323Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283300 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:21.984 rmmod nvme_tcp 00:25:21.984 rmmod nvme_fabrics 00:25:21.984 rmmod nvme_keyring 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:21.984 16:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 283152 ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 283152 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 283152 ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 283152 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:21.984 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 283152 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 283152' 00:25:22.269 killing process with pid 283152 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 283152 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 283152 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:22.269 16:31:12 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.h55 00:25:24.810 00:25:24.810 real 0m17.023s 00:25:24.810 user 0m22.493s 00:25:24.810 sys 0m5.315s 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:25:24.810 ************************************ 00:25:24.810 END TEST nvmf_fips 00:25:24.810 ************************************ 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:24.810 ************************************ 00:25:24.810 START TEST nvmf_control_msg_list 00:25:24.810 ************************************ 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:25:24.810 * Looking for test storage... 00:25:24.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:24.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.810 --rc genhtml_branch_coverage=1 00:25:24.810 --rc genhtml_function_coverage=1 00:25:24.810 --rc genhtml_legend=1 00:25:24.810 --rc geninfo_all_blocks=1 00:25:24.810 --rc geninfo_unexecuted_blocks=1 00:25:24.810 00:25:24.810 ' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:24.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.810 --rc genhtml_branch_coverage=1 00:25:24.810 --rc genhtml_function_coverage=1 00:25:24.810 --rc genhtml_legend=1 00:25:24.810 --rc geninfo_all_blocks=1 00:25:24.810 --rc geninfo_unexecuted_blocks=1 00:25:24.810 00:25:24.810 ' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:24.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.810 --rc genhtml_branch_coverage=1 00:25:24.810 --rc genhtml_function_coverage=1 00:25:24.810 --rc genhtml_legend=1 00:25:24.810 --rc geninfo_all_blocks=1 00:25:24.810 --rc geninfo_unexecuted_blocks=1 00:25:24.810 00:25:24.810 ' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:24.810 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.810 --rc genhtml_branch_coverage=1 00:25:24.810 --rc genhtml_function_coverage=1 00:25:24.810 --rc genhtml_legend=1 00:25:24.810 --rc geninfo_all_blocks=1 00:25:24.810 --rc geninfo_unexecuted_blocks=1 00:25:24.810 00:25:24.810 ' 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.810 16:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.810 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.811 16:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.811 16:31:14 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.811 16:31:14 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:25:26.716 16:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:26.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.716 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:26.717 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:26.717 16:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:26.717 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:26.717 16:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:26.717 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.717 16:31:16 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.717 16:31:16 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:26.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:25:26.717 00:25:26.717 --- 10.0.0.2 ping statistics --- 00:25:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.717 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:26.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:25:26.717 00:25:26.717 --- 10.0.0.1 ping statistics --- 00:25:26.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.717 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=286575 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 286575 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 286575 ']' 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.717 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:26.718 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:26.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:26.718 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:26.718 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:26.976 [2024-11-19 16:31:17.081441] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:25:26.976 [2024-11-19 16:31:17.081517] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:26.976 [2024-11-19 16:31:17.151694] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.976 [2024-11-19 16:31:17.194561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.976 [2024-11-19 16:31:17.194622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.976 [2024-11-19 16:31:17.194645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.976 [2024-11-19 16:31:17.194655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.976 [2024-11-19 16:31:17.194664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:26.976 [2024-11-19 16:31:17.195226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:26.976 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:26.976 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:25:26.976 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:26.976 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:26.976 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 [2024-11-19 16:31:17.333849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 Malloc0 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:27.234 [2024-11-19 16:31:17.373657] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=286603 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=286604 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=286605 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:27.234 16:31:17 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 286603 00:25:27.234 [2024-11-19 16:31:17.452558] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
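The rpc_cmd sequence above (control_msg_list.sh lines 19-23) configures the target over /var/tmp/spdk.sock, and then three perf clients are started in parallel on cores 0x2, 0x4, and 0x8. A sketch of the equivalent direct invocations, again as an uninvoked function; the rpc.py path is an assumption (the log uses the rpc_cmd wrapper), while the subsystem NQN, transport options, and perf arguments are taken verbatim from the log:

```shell
# Sketch of the target configuration and perf runs logged above.
# Defined, not invoked: it needs a running nvmf_tgt and an SPDK tree
# (the ./scripts/rpc.py and ./build/bin paths here are assumptions).
run_control_msg_list() {
    local rpc=./scripts/rpc.py nqn=nqn.2024-07.io.spdk:cnode0
    # small in-capsule size plus a single control message, per the log
    "$rpc" nvmf_create_transport -t tcp -o \
        --in-capsule-data-size 768 --control-msg-num 1
    "$rpc" nvmf_create_subsystem "$nqn" -a          # -a: allow any host
    "$rpc" bdev_malloc_create -b Malloc0 32 512     # 32 MiB bdev, 512 B blocks
    "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    # three initiators, one core each, queue depth 1, 4 KiB random reads, 1 s
    local core
    for core in 0x2 0x4 0x8; do
        ./build/bin/spdk_nvme_perf -c "$core" -q 1 -o 4096 -w randread -t 1 \
            -r "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420" &
    done
    wait
}
```

The discovery-listener deprecation warnings that follow are expected here: the perf clients connect through discovery on 10.0.0.2:4420 even though only the cnode0 listener was added.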
00:25:27.234 [2024-11-19 16:31:17.452846] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:27.234 [2024-11-19 16:31:17.453099] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:28.606 Initializing NVMe Controllers 00:25:28.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:28.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:25:28.606 Initialization complete. Launching workers. 00:25:28.606 ======================================================== 00:25:28.606 Latency(us) 00:25:28.606 Device Information : IOPS MiB/s Average min max 00:25:28.606 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3853.98 15.05 259.09 194.27 605.96 00:25:28.606 ======================================================== 00:25:28.606 Total : 3853.98 15.05 259.09 194.27 605.96 00:25:28.606 00:25:28.606 Initializing NVMe Controllers 00:25:28.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:28.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:25:28.606 Initialization complete. Launching workers. 
00:25:28.606 ======================================================== 00:25:28.606 Latency(us) 00:25:28.606 Device Information : IOPS MiB/s Average min max 00:25:28.606 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 3859.00 15.07 258.75 171.95 617.23 00:25:28.606 ======================================================== 00:25:28.606 Total : 3859.00 15.07 258.75 171.95 617.23 00:25:28.606 00:25:28.606 [2024-11-19 16:31:18.556130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1150e50 is same with the state(6) to be set 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 286604 00:25:28.606 Initializing NVMe Controllers 00:25:28.606 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:28.606 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:25:28.606 Initialization complete. Launching workers. 00:25:28.606 ======================================================== 00:25:28.606 Latency(us) 00:25:28.606 Device Information : IOPS MiB/s Average min max 00:25:28.606 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40878.98 40404.74 40941.27 00:25:28.606 ======================================================== 00:25:28.606 Total : 25.00 0.10 40878.98 40404.74 40941.27 00:25:28.606 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 286605 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@121 -- # sync 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:28.606 rmmod nvme_tcp 00:25:28.606 rmmod nvme_fabrics 00:25:28.606 rmmod nvme_keyring 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 286575 ']' 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 286575 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 286575 ']' 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 286575 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 286575 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:28.606 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 286575' 00:25:28.606 killing process with pid 286575 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 286575 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 286575 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:28.607 16:31:18 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.150 16:31:20 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:31.150 00:25:31.150 real 0m6.313s 00:25:31.150 user 0m5.511s 00:25:31.150 sys 0m2.628s 00:25:31.150 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:31.150 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:25:31.150 ************************************ 00:25:31.150 END TEST nvmf_control_msg_list 00:25:31.150 ************************************ 00:25:31.150 16:31:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:31.151 16:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:31.151 16:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:31.151 16:31:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:31.151 ************************************ 00:25:31.151 START TEST nvmf_wait_for_buf 00:25:31.151 ************************************ 00:25:31.151 16:31:20 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:25:31.151 * Looking for test storage... 
00:25:31.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:25:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.151 --rc genhtml_branch_coverage=1 00:25:31.151 --rc genhtml_function_coverage=1 00:25:31.151 --rc genhtml_legend=1 00:25:31.151 --rc geninfo_all_blocks=1 00:25:31.151 --rc geninfo_unexecuted_blocks=1 00:25:31.151 00:25:31.151 ' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.151 --rc genhtml_branch_coverage=1 00:25:31.151 --rc genhtml_function_coverage=1 00:25:31.151 --rc genhtml_legend=1 00:25:31.151 --rc geninfo_all_blocks=1 00:25:31.151 --rc geninfo_unexecuted_blocks=1 00:25:31.151 00:25:31.151 ' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.151 --rc genhtml_branch_coverage=1 00:25:31.151 --rc genhtml_function_coverage=1 00:25:31.151 --rc genhtml_legend=1 00:25:31.151 --rc geninfo_all_blocks=1 00:25:31.151 --rc geninfo_unexecuted_blocks=1 00:25:31.151 00:25:31.151 ' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:31.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:31.151 --rc genhtml_branch_coverage=1 00:25:31.151 --rc genhtml_function_coverage=1 00:25:31.151 --rc genhtml_legend=1 00:25:31.151 --rc geninfo_all_blocks=1 00:25:31.151 --rc geninfo_unexecuted_blocks=1 00:25:31.151 00:25:31.151 ' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.151 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:31.152 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:31.152 16:31:21 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:33.053 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:33.054 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:33.054 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:33.054 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:33.054 16:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:33.054 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:33.054 16:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:33.054 16:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:33.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:33.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.343 ms 00:25:33.054 00:25:33.054 --- 10.0.0.2 ping statistics --- 00:25:33.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.054 rtt min/avg/max/mdev = 0.343/0.343/0.343/0.000 ms 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:33.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:33.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:25:33.054 00:25:33.054 --- 10.0.0.1 ping statistics --- 00:25:33.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:33.054 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:33.054 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=288677 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 288677 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 288677 ']' 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:33.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:33.313 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.313 [2024-11-19 16:31:23.440752] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:25:33.313 [2024-11-19 16:31:23.440823] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:33.313 [2024-11-19 16:31:23.509495] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.313 [2024-11-19 16:31:23.550892] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:33.313 [2024-11-19 16:31:23.550953] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:25:33.313 [2024-11-19 16:31:23.550966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:33.313 [2024-11-19 16:31:23.550976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:33.313 [2024-11-19 16:31:23.550985] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:33.313 [2024-11-19 16:31:23.551603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 
16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 Malloc0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.572 [2024-11-19 16:31:23.855867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:33.572 [2024-11-19 16:31:23.880128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
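The RPC calls traced above (accel and iobuf options with a deliberately tiny small-buffer pool, framework init, a Malloc0 bdev, then TCP transport, subsystem, namespace, and listener) are the whole wait_for_buf fixture: starving the small pool is what forces buffer-wait retries later. A minimal sketch of that sequence as it would be fed to rpc.py — the socket path is an assumption, and the flag spellings are copied from this log, not checked against the current SPDK RPC reference:

```shell
# Hypothetical rpc.py invocation; socket path is an assumption, not from this log.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# The setup sequence recorded in wait_for_buf.sh@19..@26 above.
setup_cmds() {
  cat <<'EOF'
accel_set_options --small-cache-size 0 --large-cache-size 0
iobuf_set_options --small-pool-count 154 --small_bufsize=8192
framework_start_init
bdev_malloc_create -b Malloc0 32 512
nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
EOF
}

# In a live run each line would be appended to "$RPC"; here we only print the plan.
plan=$(setup_cmds)
echo "$plan"
```

Each printed line corresponds one-to-one with an `rpc_cmd` entry in the trace above; only 154 small buffers exist once the transport needs them, which is the point of the test.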
00:25:33.572 16:31:23 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.831 [2024-11-19 16:31:23.980179] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:35.205 Initializing NVMe Controllers 00:25:35.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:25:35.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:25:35.205 Initialization complete. Launching workers. 00:25:35.205 ======================================================== 00:25:35.205 Latency(us) 00:25:35.205 Device Information : IOPS MiB/s Average min max 00:25:35.205 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 129.00 16.12 32289.89 7981.51 63847.04 00:25:35.205 ======================================================== 00:25:35.205 Total : 129.00 16.12 32289.89 7981.51 63847.04 00:25:35.205 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.205 16:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:35.205 rmmod nvme_tcp 00:25:35.205 rmmod nvme_fabrics 00:25:35.205 rmmod nvme_keyring 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 288677 ']' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 288677 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 288677 ']' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 288677 
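The pass criterion above is that the nvmf_TCP small-pool retry counter pulled out of `iobuf_get_stats` with jq is non-zero (2038 here), proving the perf run actually had to wait for buffers. The same trace also records a shell pitfall earlier (`common.sh: line 33: [: : integer expression expected`, from `'[' '' -eq 1 ']'`). A small sketch of both patterns, using a canned stats string as a stand-in for the real RPC output and pure parameter expansion instead of jq (assumptions: the JSON shape is inferred from the jq filter in this log, and the empty-variable example is illustrative, not the actual variable common.sh tests):

```shell
# Hypothetical stand-in for `iobuf_get_stats` output; only the structure
# mirrors what jq's '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'
# selects on, and the number is taken from this log.
stats='[{"module":"nvmf_TCP","small_pool":{"retry":2038}}]'

# Extract the retry count without a jq dependency.
retry_count=${stats#*\"retry\":}   # strip everything up to "retry":
retry_count=${retry_count%%\}*}    # strip from the first closing brace on

if [[ $retry_count -eq 0 ]]; then
  echo "FAIL: no buffer-wait retries observed"
else
  echo "PASS: $retry_count retries"
fi

# The "integer expression expected" error comes from -eq seeing an empty
# string; defaulting the variable first keeps the test well-formed.
maybe_empty=''
if [ "${maybe_empty:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset (default)"
fi
```

With the unquoted empty string, `[ '' -eq 1 ]` fails noisily exactly as logged; the `${var:-0}` default is one common way scripts avoid that while preserving the zero/non-zero semantics.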
00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 288677 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 288677' 00:25:35.205 killing process with pid 288677 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 288677 00:25:35.205 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 288677 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:35.463 16:31:25 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:35.463 16:31:25 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:38.003 00:25:38.003 real 0m6.774s 00:25:38.003 user 0m3.289s 00:25:38.003 sys 0m2.001s 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:25:38.003 ************************************ 00:25:38.003 END TEST nvmf_wait_for_buf 00:25:38.003 ************************************ 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:38.003 ************************************ 00:25:38.003 START TEST nvmf_fuzz 00:25:38.003 ************************************ 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:25:38.003 * Looking for test storage... 00:25:38.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:25:38.003 16:31:27 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:25:38.003 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.004 --rc genhtml_branch_coverage=1 00:25:38.004 --rc genhtml_function_coverage=1 
00:25:38.004 --rc genhtml_legend=1 00:25:38.004 --rc geninfo_all_blocks=1 00:25:38.004 --rc geninfo_unexecuted_blocks=1 00:25:38.004 00:25:38.004 ' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.004 --rc genhtml_branch_coverage=1 00:25:38.004 --rc genhtml_function_coverage=1 00:25:38.004 --rc genhtml_legend=1 00:25:38.004 --rc geninfo_all_blocks=1 00:25:38.004 --rc geninfo_unexecuted_blocks=1 00:25:38.004 00:25:38.004 ' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.004 --rc genhtml_branch_coverage=1 00:25:38.004 --rc genhtml_function_coverage=1 00:25:38.004 --rc genhtml_legend=1 00:25:38.004 --rc geninfo_all_blocks=1 00:25:38.004 --rc geninfo_unexecuted_blocks=1 00:25:38.004 00:25:38.004 ' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:38.004 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:38.004 --rc genhtml_branch_coverage=1 00:25:38.004 --rc genhtml_function_coverage=1 00:25:38.004 --rc genhtml_legend=1 00:25:38.004 --rc geninfo_all_blocks=1 00:25:38.004 --rc geninfo_unexecuted_blocks=1 00:25:38.004 00:25:38.004 ' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:38.004 
16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:38.004 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:25:38.004 16:31:27 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:39.921 16:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:25:39.921 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:39.921 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:39.921 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:39.921 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:39.921 16:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:39.921 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:39.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:39.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:25:39.922 00:25:39.922 --- 10.0.0.2 ping statistics --- 00:25:39.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.922 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:39.922 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:39.922 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.159 ms 00:25:39.922 00:25:39.922 --- 10.0.0.1 ping statistics --- 00:25:39.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:39.922 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=290890 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 290890 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 290890 ']' 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:39.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:39.922 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.180 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:40.180 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:25:40.180 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:40.180 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.181 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.181 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.181 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:25:40.181 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.181 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.439 Malloc0 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.439 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:40.440 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.440 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:25:40.440 16:31:30 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:26:12.502 Fuzzing completed. 
Shutting down the fuzz application 00:26:12.502 00:26:12.502 Dumping successful admin opcodes: 00:26:12.502 8, 9, 10, 24, 00:26:12.502 Dumping successful io opcodes: 00:26:12.502 0, 9, 00:26:12.502 NS: 0x2000008eff00 I/O qp, Total commands completed: 493712, total successful commands: 2840, random_seed: 455173696 00:26:12.502 NS: 0x2000008eff00 admin qp, Total commands completed: 60288, total successful commands: 477, random_seed: 3682864000 00:26:12.502 16:32:00 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:26:12.502 Fuzzing completed. Shutting down the fuzz application 00:26:12.502 00:26:12.502 Dumping successful admin opcodes: 00:26:12.502 24, 00:26:12.502 Dumping successful io opcodes: 00:26:12.502 00:26:12.502 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3907796443 00:26:12.502 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 3907907386 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:26:12.502 16:32:02 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:12.502 rmmod nvme_tcp 00:26:12.502 rmmod nvme_fabrics 00:26:12.502 rmmod nvme_keyring 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 290890 ']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 290890 ']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 290890' 00:26:12.502 killing process with pid 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 290890 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:12.502 16:32:02 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:26:14.411 00:26:14.411 real 0m36.876s 00:26:14.411 user 0m51.252s 00:26:14.411 sys 0m14.550s 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:26:14.411 ************************************ 00:26:14.411 END TEST nvmf_fuzz 00:26:14.411 ************************************ 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.411 16:32:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:14.411 ************************************ 00:26:14.412 START TEST nvmf_multiconnection 00:26:14.412 ************************************ 00:26:14.412 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:26:14.670 * Looking for test storage... 
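The nvme_fuzz run that just finished prints per-queue-pair summary lines ("Total commands completed: …, total successful commands: …"). Those lines are easy to post-process when triaging fuzz logs; a throwaway sketch (the sample line is copied from the trace above, the regex is an assumption about the stable parts of the format):

```python
import re

# Example summary line as emitted by nvme_fuzz in the trace above.
line = ("NS: 0x2000008eff00 I/O qp, Total commands completed: 493712, "
        "total successful commands: 2840, random_seed: 455173696")

# Pull out the two counters and compute a success rate.
m = re.search(r"Total commands completed: (\d+), total successful commands: (\d+)", line)
completed, successful = int(m.group(1)), int(m.group(2))
success_rate = successful / completed
print(f"{successful}/{completed} commands succeeded ({success_rate:.4%})")
```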
00:26:14.670 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:26:14.670 16:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:26:14.670 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
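The `lt 1.15 2` call traced above (scripts/common.sh's `cmp_versions`) splits each dotted version on `.`, pads the shorter one, and compares the fields numerically rather than lexically — which is why `1.15` correctly sorts before `2`. A minimal Python analogue of that logic, for illustration only (not SPDK's actual script):

```python
def version_lt(v1: str, v2: str) -> bool:
    """Return True if dotted version v1 sorts before v2, comparing numeric fields."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    width = max(len(a), len(b))
    a += [0] * (width - len(a))  # pad, so "2" is compared as "2.0" against "1.15"
    b += [0] * (width - len(b))
    return a < b  # Python compares lists element-wise, left to right

print(version_lt("1.15", "2"))    # lcov 1.15 is older than 2
print(version_lt("2.39", "2.4"))  # numeric, not lexicographic: 39 > 4
```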
00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:14.671 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:14.671 --rc genhtml_branch_coverage=1 00:26:14.671 --rc genhtml_function_coverage=1 00:26:14.671 --rc genhtml_legend=1 00:26:14.671 --rc geninfo_all_blocks=1 00:26:14.671 --rc geninfo_unexecuted_blocks=1 00:26:14.671 00:26:14.671 ' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:14.671 16:32:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:14.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:26:14.671 16:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:17.208 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:17.208 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:17.208 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:17.208 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:17.208 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:17.209 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:17.209 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:17.209 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:17.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:17.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:26:17.209 00:26:17.209 --- 10.0.0.2 ping statistics --- 00:26:17.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.209 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:17.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:17.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:26:17.209 00:26:17.209 --- 10.0.0.1 ping statistics --- 00:26:17.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:17.209 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
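The trace above (flush addresses, create a namespace, move the physical target port into it, address both ends, open the firewall port, ping both directions) can be condensed into a standalone sketch. Interface names (`cvl_0_0`/`cvl_0_1`) and the 10.0.0.x addresses are taken from the log; since the real commands need root and the e810 ports, the `run` helper here only records each command instead of executing it.

```shell
# Condensed sketch of the namespace bring-up traced in the log.
# `run` is a dry-run recorder so the sketch is safe without root or NICs.
run() { CMDS="${CMDS}+ $*
"; }

TARGET_IF=cvl_0_0        # physical port moved into the namespace (10.0.0.2)
INITIATOR_IF=cvl_0_1     # port left in the root namespace (10.0.0.1)
NS=cvl_0_0_ns_spdk

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# open the NVMe/TCP port in the host firewall, then check both directions
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1

printf '%s' "$CMDS"
```

With the target port isolated in `cvl_0_0_ns_spdk`, the target app is later launched under `ip netns exec` so that initiator-side traffic crosses a real TCP path between the two ports.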
00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=296616 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 296616 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 296616 ']' 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:17.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.209 [2024-11-19 16:32:07.297548] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:26:17.209 [2024-11-19 16:32:07.297655] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:17.209 [2024-11-19 16:32:07.367966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:17.209 [2024-11-19 16:32:07.412889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:17.209 [2024-11-19 16:32:07.412946] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:17.209 [2024-11-19 16:32:07.412970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:17.209 [2024-11-19 16:32:07.412980] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:17.209 [2024-11-19 16:32:07.412989] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:17.209 [2024-11-19 16:32:07.414538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.209 [2024-11-19 16:32:07.414605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:17.209 [2024-11-19 16:32:07.414673] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:17.209 [2024-11-19 16:32:07.414671] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:17.209 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 [2024-11-19 16:32:07.549926] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:26:17.468 16:32:07 
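The eleven near-identical xtrace blocks that follow are one loop in `multiconnection.sh`: for each subsystem it creates a RAM-backed bdev, a subsystem, attaches the bdev as a namespace, and opens a TCP listener on 10.0.0.2:4420. A sketch of that loop, printing the equivalent `rpc.py` calls rather than issuing them (in the log, `rpc_cmd` wraps the RPC client):

```shell
# Generate the provisioning plan the trace below executes via rpc_cmd.
NVMF_SUBSYS=11
TARGET_IP=10.0.0.2
PLAN=$(for i in $(seq 1 "$NVMF_SUBSYS"); do
    echo "rpc.py bdev_malloc_create 64 512 -b Malloc$i"
    echo "rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
    echo "rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
    echo "rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $TARGET_IP -s 4420"
done)
printf '%s\n' "$PLAN"
```

Each subsystem gets a distinct serial (`SPDK1`..`SPDK11`), which the connect phase later greps for to confirm the block device appeared.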
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 Malloc1 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.468 [2024-11-19 16:32:07.618870] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.468 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 Malloc2 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 Malloc3 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 Malloc4 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 
16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 Malloc5 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.469 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 Malloc6 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 Malloc7 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 Malloc8 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 Malloc9 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.728 16:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.728 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.728 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.729 Malloc10 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.729 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.987 Malloc11 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:26:17.987 
16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:17.987 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:26:18.551 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:26:18.551 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:18.551 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:18.551 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:18.551 16:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:21.073 16:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:26:21.329 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:26:21.329 16:32:11 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:21.329 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:21.329 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:21.329 16:32:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:23.228 16:32:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:26:24.163 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:26:24.163 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:24.163 16:32:14 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:24.163 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:24.163 16:32:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:26.066 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:26:26.634 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:26:26.634 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:26.634 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.634 
16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:26.634 16:32:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:29.165 16:32:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:26:29.423 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:26:29.423 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:29.423 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:29.423 16:32:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:29.423 16:32:19 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.956 16:32:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:26:32.216 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:26:32.216 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:32.216 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:32.216 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:32.216 16:32:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:34.117 16:32:24 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:34.117 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:34.117 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:26:34.375 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:34.375 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:34.375 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:34.375 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:34.376 16:32:24 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:26:35.310 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:26:35.310 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:35.310 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:35.311 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:35.311 16:32:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:37.217 16:32:27 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:37.217 16:32:27 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:26:37.785 16:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:26:37.785 16:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:37.785 16:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:37.785 16:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:38.044 16:32:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:39.956 16:32:30 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:39.956 16:32:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:26:40.898 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:26:40.898 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:40.898 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:40.898 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:40.898 16:32:31 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:26:42.800 16:32:33 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:42.800 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:26:43.738 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:26:43.738 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:43.738 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:43.738 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:43.738 16:32:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:45.643 16:32:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:45.643 16:32:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:26:46.579 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:26:46.579 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:26:46.579 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:46.579 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:46.579 16:32:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:49.112 
16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:26:49.112 16:32:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:26:49.112 [global] 00:26:49.112 thread=1 00:26:49.112 invalidate=1 00:26:49.112 rw=read 00:26:49.112 time_based=1 00:26:49.112 runtime=10 00:26:49.112 ioengine=libaio 00:26:49.112 direct=1 00:26:49.112 bs=262144 00:26:49.112 iodepth=64 00:26:49.112 norandommap=1 00:26:49.112 numjobs=1 00:26:49.112 00:26:49.112 [job0] 00:26:49.112 filename=/dev/nvme0n1 00:26:49.112 [job1] 00:26:49.112 filename=/dev/nvme10n1 00:26:49.112 [job2] 00:26:49.112 filename=/dev/nvme1n1 00:26:49.112 [job3] 00:26:49.112 filename=/dev/nvme2n1 00:26:49.112 [job4] 00:26:49.112 filename=/dev/nvme3n1 00:26:49.112 [job5] 00:26:49.112 filename=/dev/nvme4n1 00:26:49.112 [job6] 00:26:49.112 filename=/dev/nvme5n1 00:26:49.112 [job7] 00:26:49.112 filename=/dev/nvme6n1 00:26:49.112 [job8] 00:26:49.112 filename=/dev/nvme7n1 00:26:49.112 [job9] 00:26:49.112 filename=/dev/nvme8n1 00:26:49.112 [job10] 00:26:49.112 filename=/dev/nvme9n1 00:26:49.112 Could not set queue depth (nvme0n1) 00:26:49.112 Could not set queue depth (nvme10n1) 00:26:49.112 Could not set queue depth (nvme1n1) 00:26:49.112 Could not set queue depth (nvme2n1) 00:26:49.112 Could not set queue depth (nvme3n1) 00:26:49.112 Could not set queue depth (nvme4n1) 00:26:49.112 Could not set queue depth (nvme5n1) 00:26:49.112 Could not set queue depth (nvme6n1) 00:26:49.112 Could not set queue depth (nvme7n1) 00:26:49.112 Could not set queue depth (nvme8n1) 00:26:49.112 Could not set queue depth (nvme9n1) 00:26:49.112 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:49.112 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:49.112 fio-3.35 00:26:49.112 Starting 11 threads 00:27:01.320 00:27:01.320 job0: (groupid=0, jobs=1): err= 0: pid=300874: Tue Nov 19 16:32:49 2024 00:27:01.320 read: IOPS=137, BW=34.3MiB/s (35.9MB/s)(346MiB/10081msec) 00:27:01.320 slat (usec): min=12, max=766466, avg=7256.83, stdev=36087.71 00:27:01.320 clat (msec): min=43, max=1812, avg=459.28, stdev=290.00 00:27:01.320 lat (msec): min=53, max=1812, avg=466.54, stdev=294.50 00:27:01.320 clat percentiles (msec): 00:27:01.320 | 1.00th=[ 56], 5.00th=[ 176], 10.00th=[ 205], 20.00th=[ 234], 00:27:01.320 | 30.00th=[ 275], 40.00th=[ 305], 50.00th=[ 363], 60.00th=[ 414], 00:27:01.320 | 70.00th=[ 535], 80.00th=[ 651], 90.00th=[ 1003], 95.00th=[ 1116], 00:27:01.320 | 99.00th=[ 1234], 99.50th=[ 1385], 99.90th=[ 1821], 99.95th=[ 1821], 00:27:01.320 | 99.99th=[ 1821] 00:27:01.320 bw ( KiB/s): min=15872, 
max=75264, per=7.81%, avg=37489.78, stdev=16197.10, samples=18 00:27:01.320 iops : min= 62, max= 294, avg=146.44, stdev=63.27, samples=18 00:27:01.320 lat (msec) : 50=0.07%, 100=1.88%, 250=20.84%, 500=44.36%, 750=18.52% 00:27:01.320 lat (msec) : 1000=5.21%, 2000=9.12% 00:27:01.320 cpu : usr=0.09%, sys=0.48%, ctx=167, majf=0, minf=3721 00:27:01.320 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.3%, >=64=95.4% 00:27:01.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.320 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.320 issued rwts: total=1382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.320 job1: (groupid=0, jobs=1): err= 0: pid=300875: Tue Nov 19 16:32:49 2024 00:27:01.320 read: IOPS=150, BW=37.6MiB/s (39.4MB/s)(380MiB/10101msec) 00:27:01.320 slat (usec): min=10, max=663799, avg=5471.24, stdev=29260.05 00:27:01.320 clat (usec): min=1289, max=1279.4k, avg=420058.54, stdev=361174.56 00:27:01.320 lat (usec): min=1340, max=1459.2k, avg=425529.78, stdev=365709.56 00:27:01.320 clat percentiles (usec): 00:27:01.320 | 1.00th=[ 1434], 5.00th=[ 1713], 10.00th=[ 2180], 00:27:01.320 | 20.00th=[ 25297], 30.00th=[ 156238], 40.00th=[ 240124], 00:27:01.320 | 50.00th=[ 329253], 60.00th=[ 467665], 70.00th=[ 641729], 00:27:01.320 | 80.00th=[ 792724], 90.00th=[ 985662], 95.00th=[1098908], 00:27:01.320 | 99.00th=[1182794], 99.50th=[1249903], 99.90th=[1249903], 00:27:01.320 | 99.95th=[1283458], 99.99th=[1283458] 00:27:01.320 bw ( KiB/s): min= 8704, max=218624, per=8.16%, avg=39181.47, stdev=47006.62, samples=19 00:27:01.320 iops : min= 34, max= 854, avg=153.05, stdev=183.62, samples=19 00:27:01.320 lat (msec) : 2=7.84%, 4=5.01%, 10=0.79%, 20=3.75%, 50=8.04% 00:27:01.320 lat (msec) : 100=0.46%, 250=15.28%, 500=21.21%, 750=16.01%, 1000=12.12% 00:27:01.320 lat (msec) : 2000=9.49% 00:27:01.320 cpu : usr=0.14%, sys=0.58%, ctx=458, majf=0, 
minf=4097 00:27:01.320 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:27:01.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.320 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.320 issued rwts: total=1518,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.320 job2: (groupid=0, jobs=1): err= 0: pid=300881: Tue Nov 19 16:32:49 2024 00:27:01.320 read: IOPS=125, BW=31.3MiB/s (32.8MB/s)(319MiB/10189msec) 00:27:01.320 slat (usec): min=8, max=703374, avg=4157.20, stdev=32031.84 00:27:01.320 clat (msec): min=56, max=1314, avg=507.24, stdev=315.16 00:27:01.320 lat (msec): min=56, max=1419, avg=511.40, stdev=319.47 00:27:01.320 clat percentiles (msec): 00:27:01.320 | 1.00th=[ 90], 5.00th=[ 167], 10.00th=[ 194], 20.00th=[ 232], 00:27:01.320 | 30.00th=[ 288], 40.00th=[ 351], 50.00th=[ 393], 60.00th=[ 468], 00:27:01.320 | 70.00th=[ 567], 80.00th=[ 894], 90.00th=[ 1036], 95.00th=[ 1099], 00:27:01.320 | 99.00th=[ 1234], 99.50th=[ 1267], 99.90th=[ 1301], 99.95th=[ 1318], 00:27:01.320 | 99.99th=[ 1318] 00:27:01.320 bw ( KiB/s): min=12288, max=66048, per=6.79%, avg=32606.32, stdev=17513.14, samples=19 00:27:01.320 iops : min= 48, max= 258, avg=127.37, stdev=68.41, samples=19 00:27:01.320 lat (msec) : 100=1.41%, 250=21.27%, 500=39.32%, 750=14.68%, 1000=9.97% 00:27:01.320 lat (msec) : 2000=13.34% 00:27:01.320 cpu : usr=0.04%, sys=0.41%, ctx=197, majf=0, minf=4097 00:27:01.320 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:27:01.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.320 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.320 issued rwts: total=1274,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.320 job3: (groupid=0, jobs=1): err= 0: pid=300883: Tue 
Nov 19 16:32:49 2024 00:27:01.320 read: IOPS=128, BW=32.1MiB/s (33.7MB/s)(325MiB/10104msec) 00:27:01.320 slat (usec): min=12, max=391915, avg=7726.04, stdev=29733.17 00:27:01.320 clat (msec): min=26, max=1290, avg=490.12, stdev=258.14 00:27:01.320 lat (msec): min=26, max=1290, avg=497.85, stdev=261.95 00:27:01.320 clat percentiles (msec): 00:27:01.320 | 1.00th=[ 32], 5.00th=[ 171], 10.00th=[ 241], 20.00th=[ 300], 00:27:01.320 | 30.00th=[ 330], 40.00th=[ 380], 50.00th=[ 418], 60.00th=[ 472], 00:27:01.320 | 70.00th=[ 550], 80.00th=[ 693], 90.00th=[ 902], 95.00th=[ 1028], 00:27:01.320 | 99.00th=[ 1267], 99.50th=[ 1267], 99.90th=[ 1284], 99.95th=[ 1284], 00:27:01.320 | 99.99th=[ 1284] 00:27:01.320 bw ( KiB/s): min=10752, max=60416, per=6.58%, avg=31590.40, stdev=14654.95, samples=20 00:27:01.320 iops : min= 42, max= 236, avg=123.40, stdev=57.25, samples=20 00:27:01.320 lat (msec) : 50=1.69%, 100=0.69%, 250=8.71%, 500=53.39%, 750=19.11% 00:27:01.320 lat (msec) : 1000=9.94%, 2000=6.47% 00:27:01.320 cpu : usr=0.03%, sys=0.50%, ctx=154, majf=0, minf=4098 00:27:01.320 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:01.320 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.320 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.320 issued rwts: total=1298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.320 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.320 job4: (groupid=0, jobs=1): err= 0: pid=300884: Tue Nov 19 16:32:49 2024 00:27:01.320 read: IOPS=128, BW=32.0MiB/s (33.6MB/s)(324MiB/10102msec) 00:27:01.320 slat (usec): min=11, max=306976, avg=7747.76, stdev=27283.42 00:27:01.320 clat (msec): min=69, max=1268, avg=491.48, stdev=255.75 00:27:01.320 lat (msec): min=102, max=1275, avg=499.23, stdev=259.57 00:27:01.320 clat percentiles (msec): 00:27:01.320 | 1.00th=[ 104], 5.00th=[ 209], 10.00th=[ 224], 20.00th=[ 262], 00:27:01.320 | 30.00th=[ 300], 40.00th=[ 376], 
50.00th=[ 426], 60.00th=[ 502], 00:27:01.320 | 70.00th=[ 575], 80.00th=[ 735], 90.00th=[ 844], 95.00th=[ 969], 00:27:01.320 | 99.00th=[ 1183], 99.50th=[ 1234], 99.90th=[ 1267], 99.95th=[ 1267], 00:27:01.320 | 99.99th=[ 1267] 00:27:01.320 bw ( KiB/s): min=12288, max=67072, per=6.56%, avg=31492.95, stdev=15586.82, samples=20 00:27:01.320 iops : min= 48, max= 262, avg=123.00, stdev=60.86, samples=20 00:27:01.320 lat (msec) : 100=0.08%, 250=18.39%, 500=41.04%, 750=21.41%, 1000=14.37% 00:27:01.320 lat (msec) : 2000=4.71% 00:27:01.320 cpu : usr=0.07%, sys=0.50%, ctx=169, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.2%, 32=2.5%, >=64=95.1% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job5: (groupid=0, jobs=1): err= 0: pid=300885: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=123, BW=30.9MiB/s (32.4MB/s)(315MiB/10190msec) 00:27:01.321 slat (usec): min=12, max=340848, avg=7177.26, stdev=25464.30 00:27:01.321 clat (msec): min=74, max=1286, avg=509.60, stdev=255.39 00:27:01.321 lat (msec): min=74, max=1286, avg=516.78, stdev=258.68 00:27:01.321 clat percentiles (msec): 00:27:01.321 | 1.00th=[ 77], 5.00th=[ 239], 10.00th=[ 275], 20.00th=[ 330], 00:27:01.321 | 30.00th=[ 384], 40.00th=[ 409], 50.00th=[ 426], 60.00th=[ 451], 00:27:01.321 | 70.00th=[ 506], 80.00th=[ 684], 90.00th=[ 961], 95.00th=[ 1062], 00:27:01.321 | 99.00th=[ 1183], 99.50th=[ 1217], 99.90th=[ 1250], 99.95th=[ 1284], 00:27:01.321 | 99.99th=[ 1284] 00:27:01.321 bw ( KiB/s): min= 9728, max=50688, per=6.38%, avg=30643.20, stdev=12650.13, samples=20 00:27:01.321 iops : min= 38, max= 198, avg=119.70, stdev=49.41, samples=20 00:27:01.321 lat (msec) : 100=1.74%, 250=5.15%, 500=62.41%, 750=12.61%, 
1000=9.75% 00:27:01.321 lat (msec) : 2000=8.33% 00:27:01.321 cpu : usr=0.07%, sys=0.50%, ctx=198, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.0% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1261,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job6: (groupid=0, jobs=1): err= 0: pid=300889: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=195, BW=48.8MiB/s (51.1MB/s)(497MiB/10189msec) 00:27:01.321 slat (usec): min=8, max=828620, avg=2409.17, stdev=24830.15 00:27:01.321 clat (msec): min=8, max=1300, avg=325.33, stdev=300.52 00:27:01.321 lat (msec): min=8, max=1741, avg=327.74, stdev=303.33 00:27:01.321 clat percentiles (msec): 00:27:01.321 | 1.00th=[ 26], 5.00th=[ 32], 10.00th=[ 41], 20.00th=[ 63], 00:27:01.321 | 30.00th=[ 95], 40.00th=[ 142], 50.00th=[ 236], 60.00th=[ 334], 00:27:01.321 | 70.00th=[ 443], 80.00th=[ 514], 90.00th=[ 844], 95.00th=[ 953], 00:27:01.321 | 99.00th=[ 1217], 99.50th=[ 1301], 99.90th=[ 1301], 99.95th=[ 1301], 00:27:01.321 | 99.99th=[ 1301] 00:27:01.321 bw ( KiB/s): min=14336, max=158720, per=10.80%, avg=51846.74, stdev=34206.15, samples=19 00:27:01.321 iops : min= 56, max= 620, avg=202.53, stdev=133.62, samples=19 00:27:01.321 lat (msec) : 10=0.40%, 50=13.88%, 100=16.25%, 250=21.53%, 500=25.91% 00:27:01.321 lat (msec) : 750=11.07%, 1000=6.94%, 2000=4.02% 00:27:01.321 cpu : usr=0.08%, sys=0.45%, ctx=315, majf=0, minf=4098 00:27:01.321 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 
latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job7: (groupid=0, jobs=1): err= 0: pid=300891: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=167, BW=41.9MiB/s (44.0MB/s)(427MiB/10187msec) 00:27:01.321 slat (usec): min=10, max=503555, avg=5053.47, stdev=24755.40 00:27:01.321 clat (usec): min=1518, max=1363.9k, avg=376076.53, stdev=280971.71 00:27:01.321 lat (usec): min=1864, max=1367.4k, avg=381129.99, stdev=284705.15 00:27:01.321 clat percentiles (msec): 00:27:01.321 | 1.00th=[ 6], 5.00th=[ 32], 10.00th=[ 59], 20.00th=[ 146], 00:27:01.321 | 30.00th=[ 190], 40.00th=[ 234], 50.00th=[ 305], 60.00th=[ 397], 00:27:01.321 | 70.00th=[ 447], 80.00th=[ 584], 90.00th=[ 860], 95.00th=[ 978], 00:27:01.321 | 99.00th=[ 1070], 99.50th=[ 1116], 99.90th=[ 1200], 99.95th=[ 1368], 00:27:01.321 | 99.99th=[ 1368] 00:27:01.321 bw ( KiB/s): min=11776, max=109056, per=9.23%, avg=44328.42, stdev=26275.08, samples=19 00:27:01.321 iops : min= 46, max= 426, avg=173.16, stdev=102.64, samples=19 00:27:01.321 lat (msec) : 2=0.18%, 4=0.29%, 10=3.04%, 20=0.64%, 50=4.74% 00:27:01.321 lat (msec) : 100=6.61%, 250=26.57%, 500=31.01%, 750=14.80%, 1000=8.43% 00:27:01.321 lat (msec) : 2000=3.69% 00:27:01.321 cpu : usr=0.16%, sys=0.59%, ctx=372, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job8: (groupid=0, jobs=1): err= 0: pid=300892: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=190, BW=47.5MiB/s (49.9MB/s)(485MiB/10190msec) 00:27:01.321 slat (usec): min=8, max=259712, avg=3585.52, stdev=19253.55 00:27:01.321 clat (usec): min=658, max=1277.9k, avg=332692.93, stdev=321380.37 00:27:01.321 lat 
(usec): min=708, max=1284.8k, avg=336278.45, stdev=325323.43 00:27:01.321 clat percentiles (usec): 00:27:01.321 | 1.00th=[ 1303], 5.00th=[ 1942], 10.00th=[ 5276], 00:27:01.321 | 20.00th=[ 41681], 30.00th=[ 82314], 40.00th=[ 147850], 00:27:01.321 | 50.00th=[ 223347], 60.00th=[ 354419], 70.00th=[ 467665], 00:27:01.321 | 80.00th=[ 633340], 90.00th=[ 868221], 95.00th=[1002439], 00:27:01.321 | 99.00th=[1132463], 99.50th=[1149240], 99.90th=[1283458], 00:27:01.321 | 99.95th=[1283458], 99.99th=[1283458] 00:27:01.321 bw ( KiB/s): min=11264, max=145408, per=9.99%, avg=47974.40, stdev=38087.07, samples=20 00:27:01.321 iops : min= 44, max= 568, avg=187.40, stdev=148.78, samples=20 00:27:01.321 lat (usec) : 750=0.05%, 1000=0.41% 00:27:01.321 lat (msec) : 2=4.80%, 4=2.06%, 10=10.32%, 20=1.34%, 50=4.85% 00:27:01.321 lat (msec) : 100=7.33%, 250=22.03%, 500=20.28%, 750=11.97%, 1000=9.91% 00:27:01.321 lat (msec) : 2000=4.64% 00:27:01.321 cpu : usr=0.05%, sys=0.58%, ctx=694, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1938,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job9: (groupid=0, jobs=1): err= 0: pid=300893: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=152, BW=38.2MiB/s (40.0MB/s)(385MiB/10077msec) 00:27:01.321 slat (usec): min=12, max=448485, avg=6496.91, stdev=27708.70 00:27:01.321 clat (msec): min=69, max=1483, avg=412.27, stdev=271.02 00:27:01.321 lat (msec): min=101, max=1483, avg=418.77, stdev=275.18 00:27:01.321 clat percentiles (msec): 00:27:01.321 | 1.00th=[ 138], 5.00th=[ 159], 10.00th=[ 171], 20.00th=[ 197], 00:27:01.321 | 30.00th=[ 234], 40.00th=[ 271], 50.00th=[ 317], 60.00th=[ 409], 00:27:01.321 | 70.00th=[ 451], 80.00th=[ 575], 
90.00th=[ 776], 95.00th=[ 1116], 00:27:01.321 | 99.00th=[ 1318], 99.50th=[ 1318], 99.90th=[ 1318], 99.95th=[ 1485], 00:27:01.321 | 99.99th=[ 1485] 00:27:01.321 bw ( KiB/s): min= 7168, max=85504, per=7.87%, avg=37785.60, stdev=24148.76, samples=20 00:27:01.321 iops : min= 28, max= 334, avg=147.60, stdev=94.33, samples=20 00:27:01.321 lat (msec) : 100=0.06%, 250=33.98%, 500=41.91%, 750=13.52%, 1000=3.44% 00:27:01.321 lat (msec) : 2000=7.08% 00:27:01.321 cpu : usr=0.09%, sys=0.53%, ctx=187, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=1539,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 job10: (groupid=0, jobs=1): err= 0: pid=300894: Tue Nov 19 16:32:49 2024 00:27:01.321 read: IOPS=383, BW=95.9MiB/s (101MB/s)(978MiB/10191msec) 00:27:01.321 slat (usec): min=12, max=263770, avg=2282.04, stdev=11628.63 00:27:01.321 clat (msec): min=23, max=1261, avg=164.34, stdev=193.04 00:27:01.321 lat (msec): min=24, max=1377, avg=166.62, stdev=195.75 00:27:01.321 clat percentiles (msec): 00:27:01.321 | 1.00th=[ 32], 5.00th=[ 37], 10.00th=[ 40], 20.00th=[ 43], 00:27:01.321 | 30.00th=[ 51], 40.00th=[ 63], 50.00th=[ 90], 60.00th=[ 123], 00:27:01.321 | 70.00th=[ 163], 80.00th=[ 224], 90.00th=[ 401], 95.00th=[ 523], 00:27:01.321 | 99.00th=[ 1020], 99.50th=[ 1116], 99.90th=[ 1200], 99.95th=[ 1200], 00:27:01.321 | 99.99th=[ 1267] 00:27:01.321 bw ( KiB/s): min= 6656, max=338432, per=20.51%, avg=98483.20, stdev=98895.52, samples=20 00:27:01.321 iops : min= 26, max= 1322, avg=384.70, stdev=386.31, samples=20 00:27:01.321 lat (msec) : 50=29.79%, 100=24.78%, 250=27.15%, 500=12.81%, 750=2.33% 00:27:01.321 lat (msec) : 1000=1.97%, 2000=1.18% 00:27:01.321 cpu : 
usr=0.17%, sys=1.45%, ctx=550, majf=0, minf=4097 00:27:01.321 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:27:01.321 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:01.321 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:01.321 issued rwts: total=3911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:01.321 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:01.321 00:27:01.321 Run status group 0 (all jobs): 00:27:01.321 READ: bw=469MiB/s (492MB/s), 30.9MiB/s-95.9MiB/s (32.4MB/s-101MB/s), io=4778MiB (5010MB), run=10077-10191msec 00:27:01.321 00:27:01.321 Disk stats (read/write): 00:27:01.321 nvme0n1: ios=2616/0, merge=0/0, ticks=1220019/0, in_queue=1220019, util=97.30% 00:27:01.321 nvme10n1: ios=2850/0, merge=0/0, ticks=1228626/0, in_queue=1228626, util=97.48% 00:27:01.321 nvme1n1: ios=2408/0, merge=0/0, ticks=1209148/0, in_queue=1209148, util=97.75% 00:27:01.321 nvme2n1: ios=2450/0, merge=0/0, ticks=1226211/0, in_queue=1226211, util=97.89% 00:27:01.321 nvme3n1: ios=2434/0, merge=0/0, ticks=1232319/0, in_queue=1232319, util=97.95% 00:27:01.321 nvme4n1: ios=2383/0, merge=0/0, ticks=1170297/0, in_queue=1170297, util=98.27% 00:27:01.321 nvme5n1: ios=3817/0, merge=0/0, ticks=1239850/0, in_queue=1239850, util=98.42% 00:27:01.321 nvme6n1: ios=3266/0, merge=0/0, ticks=1175926/0, in_queue=1175926, util=98.52% 00:27:01.321 nvme7n1: ios=3740/0, merge=0/0, ticks=1180693/0, in_queue=1180693, util=98.95% 00:27:01.321 nvme8n1: ios=2922/0, merge=0/0, ticks=1227852/0, in_queue=1227852, util=99.11% 00:27:01.321 nvme9n1: ios=7695/0, merge=0/0, ticks=1162124/0, in_queue=1162124, util=99.23% 00:27:01.322 16:32:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:27:01.322 [global] 00:27:01.322 thread=1 00:27:01.322 invalidate=1 00:27:01.322 
rw=randwrite 00:27:01.322 time_based=1 00:27:01.322 runtime=10 00:27:01.322 ioengine=libaio 00:27:01.322 direct=1 00:27:01.322 bs=262144 00:27:01.322 iodepth=64 00:27:01.322 norandommap=1 00:27:01.322 numjobs=1 00:27:01.322 00:27:01.322 [job0] 00:27:01.322 filename=/dev/nvme0n1 00:27:01.322 [job1] 00:27:01.322 filename=/dev/nvme10n1 00:27:01.322 [job2] 00:27:01.322 filename=/dev/nvme1n1 00:27:01.322 [job3] 00:27:01.322 filename=/dev/nvme2n1 00:27:01.322 [job4] 00:27:01.322 filename=/dev/nvme3n1 00:27:01.322 [job5] 00:27:01.322 filename=/dev/nvme4n1 00:27:01.322 [job6] 00:27:01.322 filename=/dev/nvme5n1 00:27:01.322 [job7] 00:27:01.322 filename=/dev/nvme6n1 00:27:01.322 [job8] 00:27:01.322 filename=/dev/nvme7n1 00:27:01.322 [job9] 00:27:01.322 filename=/dev/nvme8n1 00:27:01.322 [job10] 00:27:01.322 filename=/dev/nvme9n1 00:27:01.322 Could not set queue depth (nvme0n1) 00:27:01.322 Could not set queue depth (nvme10n1) 00:27:01.322 Could not set queue depth (nvme1n1) 00:27:01.322 Could not set queue depth (nvme2n1) 00:27:01.322 Could not set queue depth (nvme3n1) 00:27:01.322 Could not set queue depth (nvme4n1) 00:27:01.322 Could not set queue depth (nvme5n1) 00:27:01.322 Could not set queue depth (nvme6n1) 00:27:01.322 Could not set queue depth (nvme7n1) 00:27:01.322 Could not set queue depth (nvme8n1) 00:27:01.322 Could not set queue depth (nvme9n1) 00:27:01.322 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, 
iodepth=64 00:27:01.322 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:27:01.322 fio-3.35 00:27:01.322 Starting 11 threads 00:27:11.299 00:27:11.299 job0: (groupid=0, jobs=1): err= 0: pid=301487: Tue Nov 19 16:33:00 2024 00:27:11.299 write: IOPS=186, BW=46.6MiB/s (48.9MB/s)(475MiB/10184msec); 0 zone resets 00:27:11.299 slat (usec): min=18, max=230481, avg=4359.14, stdev=12917.28 00:27:11.299 clat (msec): min=19, max=1029, avg=338.67, stdev=234.16 00:27:11.299 lat (msec): min=19, max=1029, avg=343.03, stdev=236.85 00:27:11.299 clat percentiles (msec): 00:27:11.299 | 1.00th=[ 29], 5.00th=[ 59], 10.00th=[ 107], 20.00th=[ 144], 00:27:11.299 | 30.00th=[ 157], 40.00th=[ 171], 50.00th=[ 234], 60.00th=[ 393], 00:27:11.299 | 70.00th=[ 468], 80.00th=[ 558], 90.00th=[ 701], 95.00th=[ 785], 00:27:11.299 | 99.00th=[ 911], 99.50th=[ 969], 99.90th=[ 1003], 99.95th=[ 1028], 00:27:11.299 | 99.99th=[ 1028] 00:27:11.299 bw ( KiB/s): min=18944, max=116736, per=5.64%, avg=47005.90, stdev=30606.75, samples=20 00:27:11.299 iops : min= 74, max= 456, avg=183.60, stdev=119.56, samples=20 00:27:11.299 lat (msec) : 20=0.16%, 50=2.74%, 100=5.69%, 250=42.23%, 500=24.38% 00:27:11.299 lat (msec) : 750=18.33%, 1000=6.21%, 2000=0.26% 00:27:11.299 cpu : usr=0.58%, sys=0.58%, ctx=755, majf=0, minf=1 
00:27:11.299 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7% 00:27:11.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.299 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.299 issued rwts: total=0,1899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.299 job1: (groupid=0, jobs=1): err= 0: pid=301497: Tue Nov 19 16:33:00 2024 00:27:11.299 write: IOPS=282, BW=70.7MiB/s (74.1MB/s)(718MiB/10164msec); 0 zone resets 00:27:11.299 slat (usec): min=18, max=243586, avg=2768.19, stdev=8853.59 00:27:11.299 clat (usec): min=1450, max=879940, avg=223527.96, stdev=196687.96 00:27:11.299 lat (usec): min=1486, max=880002, avg=226296.15, stdev=198780.35 00:27:11.299 clat percentiles (msec): 00:27:11.299 | 1.00th=[ 5], 5.00th=[ 39], 10.00th=[ 81], 20.00th=[ 97], 00:27:11.299 | 30.00th=[ 108], 40.00th=[ 115], 50.00th=[ 131], 60.00th=[ 169], 00:27:11.299 | 70.00th=[ 215], 80.00th=[ 368], 90.00th=[ 550], 95.00th=[ 676], 00:27:11.299 | 99.00th=[ 827], 99.50th=[ 860], 99.90th=[ 877], 99.95th=[ 877], 00:27:11.299 | 99.99th=[ 877] 00:27:11.299 bw ( KiB/s): min=20480, max=150016, per=8.62%, avg=71936.00, stdev=44359.63, samples=20 00:27:11.299 iops : min= 80, max= 586, avg=281.00, stdev=173.28, samples=20 00:27:11.299 lat (msec) : 2=0.10%, 4=0.84%, 10=0.94%, 20=1.18%, 50=3.79% 00:27:11.299 lat (msec) : 100=15.59%, 250=50.30%, 500=14.55%, 750=10.30%, 1000=2.40% 00:27:11.299 cpu : usr=0.77%, sys=0.82%, ctx=1212, majf=0, minf=1 00:27:11.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:11.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.299 issued rwts: total=0,2873,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.299 
job2: (groupid=0, jobs=1): err= 0: pid=301500: Tue Nov 19 16:33:00 2024 00:27:11.299 write: IOPS=249, BW=62.4MiB/s (65.5MB/s)(640MiB/10243msec); 0 zone resets 00:27:11.299 slat (usec): min=15, max=170984, avg=2372.05, stdev=8867.00 00:27:11.299 clat (usec): min=1140, max=946969, avg=253760.23, stdev=203477.94 00:27:11.299 lat (usec): min=1753, max=950304, avg=256132.28, stdev=205491.22 00:27:11.299 clat percentiles (msec): 00:27:11.299 | 1.00th=[ 11], 5.00th=[ 29], 10.00th=[ 59], 20.00th=[ 97], 00:27:11.299 | 30.00th=[ 136], 40.00th=[ 155], 50.00th=[ 180], 60.00th=[ 213], 00:27:11.299 | 70.00th=[ 271], 80.00th=[ 443], 90.00th=[ 600], 95.00th=[ 676], 00:27:11.299 | 99.00th=[ 835], 99.50th=[ 894], 99.90th=[ 944], 99.95th=[ 944], 00:27:11.299 | 99.99th=[ 944] 00:27:11.299 bw ( KiB/s): min=20480, max=112128, per=7.65%, avg=63846.40, stdev=29814.75, samples=20 00:27:11.299 iops : min= 80, max= 438, avg=249.40, stdev=116.46, samples=20 00:27:11.299 lat (msec) : 2=0.08%, 4=0.35%, 10=0.51%, 20=2.35%, 50=4.69% 00:27:11.299 lat (msec) : 100=12.71%, 250=47.11%, 500=16.30%, 750=13.37%, 1000=2.54% 00:27:11.299 cpu : usr=0.58%, sys=0.97%, ctx=1515, majf=0, minf=1 00:27:11.299 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:27:11.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.299 issued rwts: total=0,2558,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.299 job3: (groupid=0, jobs=1): err= 0: pid=301501: Tue Nov 19 16:33:00 2024 00:27:11.299 write: IOPS=282, BW=70.7MiB/s (74.1MB/s)(720MiB/10177msec); 0 zone resets 00:27:11.299 slat (usec): min=17, max=207731, avg=1931.97, stdev=8692.12 00:27:11.299 clat (usec): min=925, max=826141, avg=224262.20, stdev=192559.55 00:27:11.299 lat (usec): min=1037, max=830298, avg=226194.18, stdev=194932.98 00:27:11.299 clat 
percentiles (msec): 00:27:11.299 | 1.00th=[ 4], 5.00th=[ 26], 10.00th=[ 42], 20.00th=[ 67], 00:27:11.299 | 30.00th=[ 87], 40.00th=[ 113], 50.00th=[ 153], 60.00th=[ 201], 00:27:11.299 | 70.00th=[ 292], 80.00th=[ 405], 90.00th=[ 535], 95.00th=[ 609], 00:27:11.299 | 99.00th=[ 760], 99.50th=[ 785], 99.90th=[ 818], 99.95th=[ 818], 00:27:11.299 | 99.99th=[ 827] 00:27:11.299 bw ( KiB/s): min=20480, max=172032, per=8.64%, avg=72067.45, stdev=49370.48, samples=20 00:27:11.299 iops : min= 80, max= 672, avg=281.50, stdev=192.86, samples=20 00:27:11.299 lat (usec) : 1000=0.03% 00:27:11.299 lat (msec) : 2=0.66%, 4=0.35%, 10=0.83%, 20=1.60%, 50=8.55% 00:27:11.299 lat (msec) : 100=22.97%, 250=30.96%, 500=22.20%, 750=10.15%, 1000=1.70% 00:27:11.299 cpu : usr=0.86%, sys=0.99%, ctx=1986, majf=0, minf=1 00:27:11.299 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:11.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.299 issued rwts: total=0,2878,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.299 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.299 job4: (groupid=0, jobs=1): err= 0: pid=301502: Tue Nov 19 16:33:00 2024 00:27:11.299 write: IOPS=360, BW=90.2MiB/s (94.6MB/s)(910MiB/10086msec); 0 zone resets 00:27:11.299 slat (usec): min=20, max=184910, avg=2026.92, stdev=7555.64 00:27:11.299 clat (usec): min=1126, max=836629, avg=175196.24, stdev=144693.35 00:27:11.299 lat (usec): min=1157, max=836665, avg=177223.16, stdev=146696.00 00:27:11.299 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 11], 5.00th=[ 34], 10.00th=[ 55], 20.00th=[ 92], 00:27:11.300 | 30.00th=[ 114], 40.00th=[ 130], 50.00th=[ 140], 60.00th=[ 150], 00:27:11.300 | 70.00th=[ 161], 80.00th=[ 213], 90.00th=[ 347], 95.00th=[ 542], 00:27:11.300 | 99.00th=[ 768], 99.50th=[ 802], 99.90th=[ 835], 99.95th=[ 835], 00:27:11.300 | 99.99th=[ 835] 00:27:11.300 bw ( 
KiB/s): min=18432, max=171520, per=10.98%, avg=91556.15, stdev=48056.53, samples=20 00:27:11.300 iops : min= 72, max= 670, avg=357.60, stdev=187.71, samples=20 00:27:11.300 lat (msec) : 2=0.14%, 4=0.19%, 10=0.66%, 20=2.17%, 50=5.77% 00:27:11.300 lat (msec) : 100=14.92%, 250=60.32%, 500=10.09%, 750=4.53%, 1000=1.21% 00:27:11.300 cpu : usr=1.19%, sys=1.21%, ctx=1864, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,3639,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 job5: (groupid=0, jobs=1): err= 0: pid=301503: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=317, BW=79.4MiB/s (83.3MB/s)(813MiB/10242msec); 0 zone resets 00:27:11.300 slat (usec): min=20, max=96361, avg=2234.12, stdev=7451.33 00:27:11.300 clat (usec): min=1518, max=1013.5k, avg=199152.70, stdev=207952.71 00:27:11.300 lat (usec): min=1584, max=1013.6k, avg=201386.82, stdev=209772.07 00:27:11.300 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 43], 20.00th=[ 48], 00:27:11.300 | 30.00th=[ 90], 40.00th=[ 97], 50.00th=[ 102], 60.00th=[ 114], 00:27:11.300 | 70.00th=[ 211], 80.00th=[ 347], 90.00th=[ 502], 95.00th=[ 676], 00:27:11.300 | 99.00th=[ 885], 99.50th=[ 944], 99.90th=[ 1003], 99.95th=[ 1011], 00:27:11.300 | 99.99th=[ 1011] 00:27:11.300 bw ( KiB/s): min=16384, max=268800, per=9.79%, avg=81652.95, stdev=69098.02, samples=20 00:27:11.300 iops : min= 64, max= 1050, avg=318.95, stdev=269.91, samples=20 00:27:11.300 lat (msec) : 2=0.03%, 4=0.28%, 10=0.25%, 20=1.97%, 50=18.04% 00:27:11.300 lat (msec) : 100=26.65%, 250=25.76%, 500=16.88%, 750=6.61%, 1000=3.41% 00:27:11.300 lat (msec) : 2000=0.12% 00:27:11.300 cpu : usr=0.96%, sys=0.87%, ctx=1349, majf=0, 
minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,3253,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 job6: (groupid=0, jobs=1): err= 0: pid=301504: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=445, BW=111MiB/s (117MB/s)(1136MiB/10203msec); 0 zone resets 00:27:11.300 slat (usec): min=17, max=109488, avg=1399.50, stdev=4143.72 00:27:11.300 clat (usec): min=1113, max=761441, avg=142244.24, stdev=118816.30 00:27:11.300 lat (usec): min=1181, max=761486, avg=143643.74, stdev=119328.12 00:27:11.300 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 4], 5.00th=[ 29], 10.00th=[ 50], 20.00th=[ 59], 00:27:11.300 | 30.00th=[ 68], 40.00th=[ 92], 50.00th=[ 101], 60.00th=[ 124], 00:27:11.300 | 70.00th=[ 157], 80.00th=[ 215], 90.00th=[ 309], 95.00th=[ 401], 00:27:11.300 | 99.00th=[ 600], 99.50th=[ 659], 99.90th=[ 751], 99.95th=[ 751], 00:27:11.300 | 99.99th=[ 760] 00:27:11.300 bw ( KiB/s): min=39936, max=316416, per=13.75%, avg=114696.05, stdev=67188.22, samples=20 00:27:11.300 iops : min= 156, max= 1236, avg=448.00, stdev=262.47, samples=20 00:27:11.300 lat (msec) : 2=0.33%, 4=0.84%, 10=1.54%, 20=1.06%, 50=7.94% 00:27:11.300 lat (msec) : 100=39.17%, 250=35.08%, 500=12.28%, 750=1.69%, 1000=0.07% 00:27:11.300 cpu : usr=1.31%, sys=1.33%, ctx=2149, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,4544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 
00:27:11.300 job7: (groupid=0, jobs=1): err= 0: pid=301505: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=296, BW=74.1MiB/s (77.7MB/s)(759MiB/10235msec); 0 zone resets 00:27:11.300 slat (usec): min=24, max=100427, avg=2850.53, stdev=8023.90 00:27:11.300 clat (msec): min=43, max=952, avg=212.84, stdev=175.63 00:27:11.300 lat (msec): min=43, max=952, avg=215.70, stdev=177.52 00:27:11.300 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 46], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 56], 00:27:11.300 | 30.00th=[ 96], 40.00th=[ 153], 50.00th=[ 171], 60.00th=[ 190], 00:27:11.300 | 70.00th=[ 224], 80.00th=[ 296], 90.00th=[ 514], 95.00th=[ 584], 00:27:11.300 | 99.00th=[ 768], 99.50th=[ 818], 99.90th=[ 911], 99.95th=[ 953], 00:27:11.300 | 99.99th=[ 953] 00:27:11.300 bw ( KiB/s): min=20480, max=210432, per=9.12%, avg=76057.60, stdev=57503.17, samples=20 00:27:11.300 iops : min= 80, max= 822, avg=297.10, stdev=224.62, samples=20 00:27:11.300 lat (msec) : 50=7.68%, 100=23.29%, 250=44.45%, 500=13.61%, 750=9.46% 00:27:11.300 lat (msec) : 1000=1.52% 00:27:11.300 cpu : usr=0.94%, sys=1.05%, ctx=924, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,3035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 job8: (groupid=0, jobs=1): err= 0: pid=301506: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=309, BW=77.5MiB/s (81.2MB/s)(789MiB/10182msec); 0 zone resets 00:27:11.300 slat (usec): min=24, max=247184, avg=1916.02, stdev=7718.31 00:27:11.300 clat (msec): min=2, max=982, avg=204.47, stdev=172.86 00:27:11.300 lat (msec): min=2, max=982, avg=206.38, stdev=174.76 00:27:11.300 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 12], 5.00th=[ 31], 10.00th=[ 42], 
20.00th=[ 66], 00:27:11.300 | 30.00th=[ 77], 40.00th=[ 106], 50.00th=[ 150], 60.00th=[ 192], 00:27:11.300 | 70.00th=[ 257], 80.00th=[ 351], 90.00th=[ 468], 95.00th=[ 550], 00:27:11.300 | 99.00th=[ 726], 99.50th=[ 827], 99.90th=[ 961], 99.95th=[ 969], 00:27:11.300 | 99.99th=[ 986] 00:27:11.300 bw ( KiB/s): min=28672, max=202752, per=9.49%, avg=79129.60, stdev=49568.61, samples=20 00:27:11.300 iops : min= 112, max= 792, avg=309.10, stdev=193.63, samples=20 00:27:11.300 lat (msec) : 4=0.06%, 10=0.41%, 20=2.54%, 50=10.94%, 100=24.85% 00:27:11.300 lat (msec) : 250=30.27%, 500=23.68%, 750=6.43%, 1000=0.82% 00:27:11.300 cpu : usr=0.98%, sys=1.28%, ctx=1927, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,3155,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 job9: (groupid=0, jobs=1): err= 0: pid=301507: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=274, BW=68.7MiB/s (72.0MB/s)(703MiB/10243msec); 0 zone resets 00:27:11.300 slat (usec): min=26, max=219244, avg=2534.52, stdev=8903.83 00:27:11.300 clat (msec): min=16, max=939, avg=230.27, stdev=151.42 00:27:11.300 lat (msec): min=16, max=939, avg=232.80, stdev=153.10 00:27:11.300 clat percentiles (msec): 00:27:11.300 | 1.00th=[ 45], 5.00th=[ 79], 10.00th=[ 99], 20.00th=[ 128], 00:27:11.300 | 30.00th=[ 144], 40.00th=[ 161], 50.00th=[ 178], 60.00th=[ 201], 00:27:11.300 | 70.00th=[ 241], 80.00th=[ 326], 90.00th=[ 447], 95.00th=[ 567], 00:27:11.300 | 99.00th=[ 760], 99.50th=[ 827], 99.90th=[ 902], 99.95th=[ 944], 00:27:11.300 | 99.99th=[ 944] 00:27:11.300 bw ( KiB/s): min=20480, max=118784, per=8.44%, avg=70378.50, stdev=32086.51, samples=20 00:27:11.300 iops : min= 80, max= 464, avg=274.90, 
stdev=125.35, samples=20 00:27:11.300 lat (msec) : 20=0.04%, 50=1.60%, 100=9.03%, 250=60.72%, 500=20.51% 00:27:11.300 lat (msec) : 750=7.07%, 1000=1.03% 00:27:11.300 cpu : usr=0.82%, sys=1.11%, ctx=1560, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:27:11.300 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,2813,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 job10: (groupid=0, jobs=1): err= 0: pid=301508: Tue Nov 19 16:33:00 2024 00:27:11.300 write: IOPS=267, BW=66.8MiB/s (70.0MB/s)(682MiB/10212msec); 0 zone resets 00:27:11.300 slat (usec): min=17, max=259996, avg=2285.16, stdev=10320.19 00:27:11.300 clat (usec): min=734, max=936944, avg=237161.47, stdev=239558.20 00:27:11.300 lat (usec): min=757, max=968629, avg=239446.63, stdev=242426.56 00:27:11.300 clat percentiles (usec): 00:27:11.300 | 1.00th=[ 1844], 5.00th=[ 4113], 10.00th=[ 6587], 20.00th=[ 11469], 00:27:11.300 | 30.00th=[ 21627], 40.00th=[111674], 50.00th=[154141], 60.00th=[214959], 00:27:11.300 | 70.00th=[383779], 80.00th=[484443], 90.00th=[583009], 95.00th=[692061], 00:27:11.300 | 99.00th=[868221], 99.50th=[910164], 99.90th=[935330], 99.95th=[935330], 00:27:11.300 | 99.99th=[935330] 00:27:11.300 bw ( KiB/s): min=16384, max=225792, per=8.18%, avg=68224.00, stdev=51976.10, samples=20 00:27:11.300 iops : min= 64, max= 882, avg=266.50, stdev=203.03, samples=20 00:27:11.300 lat (usec) : 750=0.04%, 1000=0.22% 00:27:11.300 lat (msec) : 2=1.06%, 4=3.56%, 10=11.55%, 20=12.13%, 50=6.49% 00:27:11.300 lat (msec) : 100=4.29%, 250=22.62%, 500=19.76%, 750=15.14%, 1000=3.15% 00:27:11.300 cpu : usr=0.64%, sys=1.07%, ctx=2024, majf=0, minf=1 00:27:11.300 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:27:11.300 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:11.300 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:27:11.300 issued rwts: total=0,2728,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:11.300 latency : target=0, window=0, percentile=100.00%, depth=64 00:27:11.300 00:27:11.300 Run status group 0 (all jobs): 00:27:11.300 WRITE: bw=815MiB/s (854MB/s), 46.6MiB/s-111MiB/s (48.9MB/s-117MB/s), io=8344MiB (8749MB), run=10086-10243msec 00:27:11.300 00:27:11.300 Disk stats (read/write): 00:27:11.300 nvme0n1: ios=49/3648, merge=0/0, ticks=236/1182774, in_queue=1183010, util=99.39% 00:27:11.300 nvme10n1: ios=47/5533, merge=0/0, ticks=43/1187712, in_queue=1187755, util=97.61% 00:27:11.300 nvme1n1: ios=46/5066, merge=0/0, ticks=1709/1237745, in_queue=1239454, util=100.00% 00:27:11.301 nvme2n1: ios=0/5591, merge=0/0, ticks=0/1225282, in_queue=1225282, util=97.83% 00:27:11.301 nvme3n1: ios=43/7102, merge=0/0, ticks=1678/1198684, in_queue=1200362, util=100.00% 00:27:11.301 nvme4n1: ios=0/6456, merge=0/0, ticks=0/1235422, in_queue=1235422, util=98.25% 00:27:11.301 nvme5n1: ios=0/9048, merge=0/0, ticks=0/1251172, in_queue=1251172, util=98.40% 00:27:11.301 nvme6n1: ios=30/6021, merge=0/0, ticks=107/1228840, in_queue=1228947, util=99.17% 00:27:11.301 nvme7n1: ios=25/6171, merge=0/0, ticks=1074/1212442, in_queue=1213516, util=99.92% 00:27:11.301 nvme8n1: ios=41/5573, merge=0/0, ticks=1925/1236161, in_queue=1238086, util=100.00% 00:27:11.301 nvme9n1: ios=0/5429, merge=0/0, ticks=0/1246371, in_queue=1246371, util=99.07% 00:27:11.301 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:27:11.301 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:27:11.301 16:33:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.301 16:33:00 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:11.301 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:27:11.301 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:27:11.301 
16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.301 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:27:11.561 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.561 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:27:11.820 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:27:11.820 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:27:11.820 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:11.820 16:33:01 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:11.820 16:33:01 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:11.820 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:27:12.079 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l 
-o NAME,SERIAL 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.079 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:27:12.337 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.337 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:27:12.596 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:27:12.596 
16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:27:12.596 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:27:12.596 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.855 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:27:12.855 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.855 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.855 16:33:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:27:12.855 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:27:12.855 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:27:12.855 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:12.855 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:12.855 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:12.856 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:27:13.117 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:27:13.117 
NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:13.117 
16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:13.117 rmmod nvme_tcp 00:27:13.117 rmmod nvme_fabrics 00:27:13.117 rmmod nvme_keyring 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 296616 ']' 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 296616 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 296616 ']' 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 296616 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296616 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.117 16:33:03 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296616' 00:27:13.117 killing process with pid 296616 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 296616 00:27:13.117 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 296616 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:27:13.687 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.688 16:33:03 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:27:16.225 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:16.225 00:27:16.225 real 1m1.246s 00:27:16.225 user 3m35.223s 00:27:16.225 sys 0m15.187s 00:27:16.225 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.225 16:33:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:27:16.225 ************************************ 00:27:16.225 END TEST nvmf_multiconnection 00:27:16.225 ************************************ 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:16.225 ************************************ 00:27:16.225 START TEST nvmf_initiator_timeout 00:27:16.225 ************************************ 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:27:16.225 * Looking for test storage... 
00:27:16.225 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.225 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.226 --rc genhtml_branch_coverage=1 00:27:16.226 --rc genhtml_function_coverage=1 00:27:16.226 --rc genhtml_legend=1 00:27:16.226 --rc geninfo_all_blocks=1 00:27:16.226 --rc geninfo_unexecuted_blocks=1 00:27:16.226 00:27:16.226 ' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.226 --rc genhtml_branch_coverage=1 00:27:16.226 --rc genhtml_function_coverage=1 00:27:16.226 --rc genhtml_legend=1 00:27:16.226 --rc geninfo_all_blocks=1 00:27:16.226 --rc geninfo_unexecuted_blocks=1 00:27:16.226 00:27:16.226 ' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.226 --rc genhtml_branch_coverage=1 00:27:16.226 --rc genhtml_function_coverage=1 00:27:16.226 --rc genhtml_legend=1 00:27:16.226 --rc geninfo_all_blocks=1 00:27:16.226 --rc geninfo_unexecuted_blocks=1 00:27:16.226 00:27:16.226 ' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.226 --rc genhtml_branch_coverage=1 00:27:16.226 --rc genhtml_function_coverage=1 00:27:16.226 --rc genhtml_legend=1 00:27:16.226 --rc geninfo_all_blocks=1 00:27:16.226 --rc geninfo_unexecuted_blocks=1 00:27:16.226 00:27:16.226 ' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.226 
16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:16.226 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:27:16.226 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:27:16.227 16:33:06 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:18.127 16:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:18.127 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:18.127 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:18.127 16:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:18.127 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:18.128 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:18.128 16:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:18.128 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:18.128 16:33:08 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:18.128 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:18.128 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:27:18.128 00:27:18.128 --- 10.0.0.2 ping statistics --- 00:27:18.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.128 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:18.128 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:18.128 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:27:18.128 00:27:18.128 --- 10.0.0.1 ping statistics --- 00:27:18.128 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:18.128 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=305282 
00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 305282 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 305282 ']' 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:18.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:18.128 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.386 [2024-11-19 16:33:08.497565] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:27:18.386 [2024-11-19 16:33:08.497642] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:18.386 [2024-11-19 16:33:08.570813] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:18.386 [2024-11-19 16:33:08.617256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:18.386 [2024-11-19 16:33:08.617307] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:18.386 [2024-11-19 16:33:08.617330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:18.386 [2024-11-19 16:33:08.617341] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:18.386 [2024-11-19 16:33:08.617351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:18.386 [2024-11-19 16:33:08.618978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:18.386 [2024-11-19 16:33:08.619005] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:18.386 [2024-11-19 16:33:08.619097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:18.386 [2024-11-19 16:33:08.619102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:18.644 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:18.644 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:27:18.645 
16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 Malloc0 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 Delay0 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 [2024-11-19 16:33:08.806886] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:18.645 [2024-11-19 16:33:08.835186] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.645 16:33:08 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:27:19.211 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:27:19.211 
16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:27:19.211 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:27:19.211 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:27:19.211 16:33:09 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=305718 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:27:21.114 16:33:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:27:21.373 [global] 00:27:21.373 thread=1 00:27:21.373 invalidate=1 00:27:21.373 rw=write 00:27:21.373 time_based=1 00:27:21.373 runtime=60 00:27:21.373 ioengine=libaio 00:27:21.373 direct=1 00:27:21.373 bs=4096 00:27:21.373 
iodepth=1 00:27:21.373 norandommap=0 00:27:21.373 numjobs=1 00:27:21.373 00:27:21.373 verify_dump=1 00:27:21.373 verify_backlog=512 00:27:21.373 verify_state_save=0 00:27:21.373 do_verify=1 00:27:21.373 verify=crc32c-intel 00:27:21.373 [job0] 00:27:21.373 filename=/dev/nvme0n1 00:27:21.373 Could not set queue depth (nvme0n1) 00:27:21.373 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:27:21.373 fio-3.35 00:27:21.373 Starting 1 thread 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:24.661 true 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.661 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:24.661 true 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:27:24.662 true 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:24.662 true 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.662 16:33:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.197 true 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.197 true 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.197 16:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.197 true 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:27.197 true 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:27:27.197 16:33:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 305718 00:28:23.431 00:28:23.431 job0: (groupid=0, jobs=1): err= 0: pid=305800: Tue Nov 19 16:34:11 2024 00:28:23.431 read: IOPS=91, BW=368KiB/s (376kB/s)(21.5MiB/60010msec) 00:28:23.431 slat (nsec): min=4039, max=63795, avg=8072.03, stdev=6940.11 00:28:23.431 clat (usec): min=200, max=41066k, avg=10653.78, stdev=552990.99 00:28:23.431 lat (usec): min=204, max=41066k, avg=10661.85, stdev=552991.12 00:28:23.431 clat percentiles (usec): 00:28:23.431 | 1.00th=[ 212], 5.00th=[ 219], 10.00th=[ 223], 00:28:23.431 | 20.00th=[ 229], 30.00th=[ 233], 40.00th=[ 237], 00:28:23.431 | 50.00th=[ 239], 60.00th=[ 243], 70.00th=[ 249], 00:28:23.431 | 80.00th=[ 269], 90.00th=[ 371], 95.00th=[ 
41157], 00:28:23.431 | 99.00th=[ 41681], 99.50th=[ 42206], 99.90th=[ 42206], 00:28:23.431 | 99.95th=[ 42206], 99.99th=[17112761] 00:28:23.431 write: IOPS=93, BW=375KiB/s (384kB/s)(22.0MiB/60010msec); 0 zone resets 00:28:23.431 slat (usec): min=5, max=28217, avg=13.54, stdev=375.92 00:28:23.431 clat (usec): min=157, max=498, avg=194.60, stdev=39.77 00:28:23.431 lat (usec): min=164, max=28498, avg=208.14, stdev=379.36 00:28:23.431 clat percentiles (usec): 00:28:23.431 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:28:23.431 | 30.00th=[ 180], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:28:23.431 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 223], 95.00th=[ 260], 00:28:23.431 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 424], 99.95th=[ 457], 00:28:23.431 | 99.99th=[ 498] 00:28:23.431 bw ( KiB/s): min= 2792, max= 9384, per=100.00%, avg=6436.57, stdev=2645.93, samples=7 00:28:23.431 iops : min= 698, max= 2346, avg=1609.14, stdev=661.48, samples=7 00:28:23.431 lat (usec) : 250=83.10%, 500=13.30% 00:28:23.431 lat (msec) : 50=3.59%, >=2000=0.01% 00:28:23.431 cpu : usr=0.10%, sys=0.15%, ctx=11150, majf=0, minf=1 00:28:23.431 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:23.431 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.431 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:23.431 issued rwts: total=5516,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:23.431 latency : target=0, window=0, percentile=100.00%, depth=1 00:28:23.431 00:28:23.431 Run status group 0 (all jobs): 00:28:23.431 READ: bw=368KiB/s (376kB/s), 368KiB/s-368KiB/s (376kB/s-376kB/s), io=21.5MiB (22.6MB), run=60010-60010msec 00:28:23.431 WRITE: bw=375KiB/s (384kB/s), 375KiB/s-375KiB/s (384kB/s-384kB/s), io=22.0MiB (23.1MB), run=60010-60010msec 00:28:23.431 00:28:23.431 Disk stats (read/write): 00:28:23.431 nvme0n1: ios=5565/5632, merge=0/0, ticks=18863/1067, in_queue=19930, util=99.86% 
00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:28:23.431 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:28:23.431 nvmf hotplug test: fio successful as expected 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:23.431 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:23.432 16:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:23.432 16:34:11 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:23.432 rmmod nvme_tcp 00:28:23.432 rmmod nvme_fabrics 00:28:23.432 rmmod nvme_keyring 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 305282 ']' 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 305282 ']' 00:28:23.432 
16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305282' 00:28:23.432 killing process with pid 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 305282 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # 
iptables-restore 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:23.432 16:34:12 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:24.002 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:24.002 00:28:24.002 real 1m8.293s 00:28:24.002 user 4m10.674s 00:28:24.002 sys 0m6.714s 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:28:24.261 ************************************ 00:28:24.261 END TEST nvmf_initiator_timeout 00:28:24.261 ************************************ 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:28:24.261 16:34:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:26.168 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:26.168 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:26.168 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:26.168 16:34:16 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:26.168 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:26.168 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:26.169 16:34:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:28:26.428 ************************************ 00:28:26.428 START 
TEST nvmf_perf_adq 00:28:26.428 ************************************ 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:28:26.428 * Looking for test storage... 00:28:26.428 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:28:26.428 16:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.428 --rc genhtml_branch_coverage=1 00:28:26.428 --rc genhtml_function_coverage=1 00:28:26.428 --rc genhtml_legend=1 00:28:26.428 --rc geninfo_all_blocks=1 00:28:26.428 --rc geninfo_unexecuted_blocks=1 00:28:26.428 00:28:26.428 ' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.428 --rc genhtml_branch_coverage=1 00:28:26.428 --rc genhtml_function_coverage=1 00:28:26.428 --rc genhtml_legend=1 00:28:26.428 --rc geninfo_all_blocks=1 00:28:26.428 --rc geninfo_unexecuted_blocks=1 00:28:26.428 00:28:26.428 ' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.428 --rc genhtml_branch_coverage=1 00:28:26.428 --rc genhtml_function_coverage=1 00:28:26.428 --rc genhtml_legend=1 00:28:26.428 --rc geninfo_all_blocks=1 00:28:26.428 --rc geninfo_unexecuted_blocks=1 00:28:26.428 00:28:26.428 ' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:26.428 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:26.428 --rc genhtml_branch_coverage=1 00:28:26.428 --rc genhtml_function_coverage=1 00:28:26.428 --rc genhtml_legend=1 00:28:26.428 --rc geninfo_all_blocks=1 00:28:26.428 --rc geninfo_unexecuted_blocks=1 00:28:26.428 00:28:26.428 ' 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:26.428 
16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:26.428 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:26.429 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:28:26.429 16:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:26.429 16:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.964 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:28.964 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:28.964 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:28.964 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:28.964 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:28.965 16:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:28.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:28.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:28.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:28.965 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:28.965 16:34:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:29.223 16:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:33.412 16:34:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:38.689 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:38.689 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:38.690 16:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:38.690 16:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:38.690 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:28:38.690 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:38.690 Found net devices under 0000:0a:00.0: cvl_0_0 
00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:38.690 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:38.690 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:38.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:38.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:28:38.691 00:28:38.691 --- 10.0.0.2 ping statistics --- 00:28:38.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.691 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:38.691 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:38.691 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:28:38.691 00:28:38.691 --- 10.0.0.1 ping statistics --- 00:28:38.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:38.691 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=317584 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 317584 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 317584 ']' 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:38.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:38.691 [2024-11-19 16:34:28.481694] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:28:38.691 [2024-11-19 16:34:28.481787] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:38.691 [2024-11-19 16:34:28.554594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:38.691 [2024-11-19 16:34:28.601076] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:38.691 [2024-11-19 16:34:28.601144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:38.691 [2024-11-19 16:34:28.601173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:38.691 [2024-11-19 16:34:28.601184] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:38.691 [2024-11-19 16:34:28.601194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:38.691 [2024-11-19 16:34:28.602893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:38.691 [2024-11-19 16:34:28.603019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:28:38.691 [2024-11-19 16:34:28.603053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:28:38.691 [2024-11-19 16:34:28.603056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 [2024-11-19 16:34:28.909466] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 Malloc1
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:38.691 [2024-11-19 16:34:28.972039] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=317620
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2
00:28:38.691 16:34:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:28:41.229 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats
00:28:41.229 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.229 16:34:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{
00:28:41.229 "tick_rate": 2700000000,
00:28:41.229 "poll_groups": [
00:28:41.229 {
00:28:41.229 "name": "nvmf_tgt_poll_group_000",
00:28:41.229 "admin_qpairs": 1,
00:28:41.229 "io_qpairs": 1,
00:28:41.229 "current_admin_qpairs": 1,
00:28:41.229 "current_io_qpairs": 1,
00:28:41.229 "pending_bdev_io": 0,
00:28:41.229 "completed_nvme_io": 19420,
00:28:41.229 "transports": [
00:28:41.229 {
00:28:41.229 "trtype": "TCP"
00:28:41.229 }
00:28:41.229 ]
00:28:41.229 },
00:28:41.229 {
00:28:41.229 "name": "nvmf_tgt_poll_group_001",
00:28:41.229 "admin_qpairs": 0,
00:28:41.229 "io_qpairs": 1,
00:28:41.229 "current_admin_qpairs": 0,
00:28:41.229 "current_io_qpairs": 1,
00:28:41.229 "pending_bdev_io": 0,
00:28:41.229 "completed_nvme_io": 18903,
00:28:41.229 "transports": [
00:28:41.229 {
00:28:41.229 "trtype": "TCP"
00:28:41.229 }
00:28:41.229 ]
00:28:41.229 },
00:28:41.229 {
00:28:41.229 "name": "nvmf_tgt_poll_group_002",
00:28:41.229 "admin_qpairs": 0,
00:28:41.229 "io_qpairs": 1,
00:28:41.229 "current_admin_qpairs": 0,
00:28:41.229 "current_io_qpairs": 1,
00:28:41.229 "pending_bdev_io": 0,
00:28:41.229 "completed_nvme_io": 19786,
00:28:41.229 "transports": [
00:28:41.229 {
00:28:41.229 "trtype": "TCP"
00:28:41.229 }
00:28:41.229 ]
00:28:41.229 },
00:28:41.229 {
00:28:41.229 "name": "nvmf_tgt_poll_group_003",
00:28:41.229 "admin_qpairs": 0,
00:28:41.229 "io_qpairs": 1,
00:28:41.229 "current_admin_qpairs": 0,
00:28:41.229 "current_io_qpairs": 1,
00:28:41.229 "pending_bdev_io": 0,
00:28:41.229 "completed_nvme_io": 19756,
00:28:41.229 "transports": [
00:28:41.229 {
00:28:41.229 "trtype": "TCP"
00:28:41.229 }
00:28:41.229 ]
00:28:41.229 }
00:28:41.229 ]
00:28:41.229 }'
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length'
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]]
00:28:41.229 16:34:31 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 317620
00:28:49.350 Initializing NVMe Controllers
00:28:49.350 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:49.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:28:49.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:28:49.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:28:49.350 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:28:49.350 Initialization complete. Launching workers.
00:28:49.350 ========================================================
00:28:49.350 Latency(us)
00:28:49.350 Device Information : IOPS MiB/s Average min max
00:28:49.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10490.77 40.98 6100.87 2508.37 9986.81
00:28:49.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10081.87 39.38 6349.79 2395.45 10895.16
00:28:49.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10563.87 41.27 6059.30 2426.17 9887.06
00:28:49.350 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10340.67 40.39 6189.86 2497.83 10085.40
00:28:49.350 ========================================================
00:28:49.350 Total : 41477.20 162.02 6172.97 2395.45 10895.16
00:28:49.350
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:49.350 rmmod nvme_tcp
00:28:49.350 rmmod nvme_fabrics
00:28:49.350 rmmod nvme_keyring
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 317584 ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 317584 ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 317584'
00:28:49.350 killing process with pid 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 317584
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:28:49.350 16:34:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:28:51.257 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:28:51.257 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver
00:28:51.257 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio
00:28:51.257 16:34:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice
00:28:52.195 16:34:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice
00:28:54.729 16:34:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns
00:29:00.005 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=()
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:29:00.006 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:29:00.006 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:29:00.006 Found net devices under 0000:0a:00.0: cvl_0_0
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:29:00.006 Found net devices under 0000:0a:00.1: cvl_0_1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:29:00.006 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:29:00.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:29:00.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms
00:29:00.006
00:29:00.007 --- 10.0.0.2 ping statistics ---
00:29:00.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:00.007 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:29:00.007 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:29:00.007 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms
00:29:00.007
00:29:00.007 --- 10.0.0.1 ping statistics ---
00:29:00.007 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:29:00.007 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:29:00.007 net.core.busy_poll = 1
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:29:00.007 net.core.busy_read = 1
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=320235
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 320235
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 320235 ']'
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:00.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:00.007 16:34:49 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.007 [2024-11-19 16:34:49.920959] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:29:00.007 [2024-11-19 16:34:49.921040] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:00.007 [2024-11-19 16:34:49.993542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:29:00.007 [2024-11-19 16:34:50.044810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:00.007 [2024-11-19 16:34:50.044883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:00.007 [2024-11-19 16:34:50.044897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:00.007 [2024-11-19 16:34:50.044908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:00.007 [2024-11-19 16:34:50.044931] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:00.007 [2024-11-19 16:34:50.046531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:00.007 [2024-11-19 16:34:50.046640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:29:00.007 [2024-11-19 16:34:50.046719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:29:00.007 [2024-11-19 16:34:50.046722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.007 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 [2024-11-19 16:34:50.362468] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 Malloc1
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:29:00.266 [2024-11-19 16:34:50.427828] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=320387
00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:29:00.266 16:34:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:02.171 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:29:02.171 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:02.171 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:02.171 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:02.171 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:29:02.171 "tick_rate": 2700000000, 00:29:02.171 "poll_groups": [ 00:29:02.171 { 00:29:02.171 "name": "nvmf_tgt_poll_group_000", 00:29:02.171 "admin_qpairs": 1, 00:29:02.171 "io_qpairs": 2, 00:29:02.171 "current_admin_qpairs": 1, 00:29:02.171 "current_io_qpairs": 2, 00:29:02.171 "pending_bdev_io": 0, 00:29:02.171 "completed_nvme_io": 25989, 00:29:02.171 "transports": [ 00:29:02.171 { 00:29:02.171 "trtype": "TCP" 00:29:02.171 } 00:29:02.171 ] 00:29:02.171 }, 00:29:02.171 { 00:29:02.171 "name": "nvmf_tgt_poll_group_001", 00:29:02.171 "admin_qpairs": 0, 00:29:02.171 "io_qpairs": 2, 00:29:02.171 "current_admin_qpairs": 0, 00:29:02.171 "current_io_qpairs": 2, 00:29:02.171 "pending_bdev_io": 0, 00:29:02.171 "completed_nvme_io": 25811, 00:29:02.171 "transports": [ 00:29:02.171 { 00:29:02.171 "trtype": "TCP" 00:29:02.171 } 00:29:02.171 ] 00:29:02.171 }, 00:29:02.171 { 00:29:02.171 "name": "nvmf_tgt_poll_group_002", 00:29:02.171 "admin_qpairs": 0, 00:29:02.171 "io_qpairs": 0, 00:29:02.171 "current_admin_qpairs": 0, 
00:29:02.171 "current_io_qpairs": 0, 00:29:02.171 "pending_bdev_io": 0, 00:29:02.171 "completed_nvme_io": 0, 00:29:02.171 "transports": [ 00:29:02.171 { 00:29:02.171 "trtype": "TCP" 00:29:02.171 } 00:29:02.171 ] 00:29:02.171 }, 00:29:02.171 { 00:29:02.171 "name": "nvmf_tgt_poll_group_003", 00:29:02.171 "admin_qpairs": 0, 00:29:02.171 "io_qpairs": 0, 00:29:02.171 "current_admin_qpairs": 0, 00:29:02.171 "current_io_qpairs": 0, 00:29:02.171 "pending_bdev_io": 0, 00:29:02.171 "completed_nvme_io": 0, 00:29:02.171 "transports": [ 00:29:02.171 { 00:29:02.171 "trtype": "TCP" 00:29:02.171 } 00:29:02.171 ] 00:29:02.171 } 00:29:02.171 ] 00:29:02.171 }' 00:29:02.172 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:29:02.172 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:29:02.172 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:29:02.172 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:29:02.172 16:34:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 320387 00:29:10.302 Initializing NVMe Controllers 00:29:10.302 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:10.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:29:10.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:29:10.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:29:10.302 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:29:10.302 Initialization complete. Launching workers. 
00:29:10.302 ======================================================== 00:29:10.302 Latency(us) 00:29:10.302 Device Information : IOPS MiB/s Average min max 00:29:10.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7255.97 28.34 8849.99 1757.30 54663.75 00:29:10.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 6278.69 24.53 10193.07 2050.97 54603.92 00:29:10.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6616.28 25.84 9708.86 1801.51 53412.61 00:29:10.302 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7026.48 27.45 9108.02 1336.86 53657.91 00:29:10.302 ======================================================== 00:29:10.302 Total : 27177.42 106.16 9436.08 1336.86 54663.75 00:29:10.302 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:10.302 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:10.302 rmmod nvme_tcp 00:29:10.302 rmmod nvme_fabrics 00:29:10.559 rmmod nvme_keyring 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:29:10.559 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 320235 ']' 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 320235 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 320235 ']' 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 320235 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 320235 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 320235' 00:29:10.559 killing process with pid 320235 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 320235 00:29:10.559 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 320235 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:29:10.817 16:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:10.817 16:35:00 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:29:14.108 00:29:14.108 real 0m47.435s 00:29:14.108 user 2m39.460s 00:29:14.108 sys 0m11.437s 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:29:14.108 ************************************ 00:29:14.108 END TEST nvmf_perf_adq 00:29:14.108 ************************************ 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.108 16:35:03 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:29:14.108 ************************************ 00:29:14.108 START TEST nvmf_shutdown 00:29:14.108 ************************************ 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:29:14.108 * Looking for test storage... 00:29:14.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:14.108 16:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:14.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.108 --rc genhtml_branch_coverage=1 00:29:14.108 --rc genhtml_function_coverage=1 00:29:14.108 --rc genhtml_legend=1 00:29:14.108 --rc geninfo_all_blocks=1 00:29:14.108 --rc geninfo_unexecuted_blocks=1 00:29:14.108 00:29:14.108 ' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:14.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.108 --rc genhtml_branch_coverage=1 00:29:14.108 --rc genhtml_function_coverage=1 00:29:14.108 --rc genhtml_legend=1 00:29:14.108 --rc geninfo_all_blocks=1 00:29:14.108 --rc geninfo_unexecuted_blocks=1 00:29:14.108 00:29:14.108 ' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:14.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.108 --rc genhtml_branch_coverage=1 00:29:14.108 --rc genhtml_function_coverage=1 00:29:14.108 --rc genhtml_legend=1 00:29:14.108 --rc geninfo_all_blocks=1 00:29:14.108 --rc geninfo_unexecuted_blocks=1 00:29:14.108 00:29:14.108 ' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:14.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:14.108 --rc genhtml_branch_coverage=1 00:29:14.108 --rc genhtml_function_coverage=1 00:29:14.108 --rc genhtml_legend=1 00:29:14.108 --rc geninfo_all_blocks=1 00:29:14.108 --rc geninfo_unexecuted_blocks=1 00:29:14.108 00:29:14.108 ' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:14.108 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:14.109 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.109 ************************************ 00:29:14.109 START TEST nvmf_shutdown_tc1 00:29:14.109 ************************************ 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.109 16:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:29:16.012 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:16.012 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:16.012 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.012 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:16.012 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:16.012 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:16.012 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:16.012 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:16.012 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:16.013 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:16.013 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:16.271 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:16.271 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:29:16.271 00:29:16.271 --- 10.0.0.2 ping statistics --- 00:29:16.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.271 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:16.271 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:16.271 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:29:16.271 00:29:16.271 --- 10.0.0.1 ping statistics --- 00:29:16.271 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:16.271 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=323687 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 323687 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 323687 ']' 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:29:16.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:16.271 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.271 [2024-11-19 16:35:06.442277] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:29:16.271 [2024-11-19 16:35:06.442350] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:16.271 [2024-11-19 16:35:06.512266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:16.271 [2024-11-19 16:35:06.557718] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:16.271 [2024-11-19 16:35:06.557766] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:16.271 [2024-11-19 16:35:06.557789] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:16.271 [2024-11-19 16:35:06.557799] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:16.271 [2024-11-19 16:35:06.557809] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
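The EAL parameters in the record above show the target was launched with core mask 0x1E, and the reactor notices that follow confirm exactly cores 1 through 4 came up. As a hedged aside (this helper is illustrative, not part of the SPDK harness), decoding such a `-m` mask is a plain bit scan:

```python
def cores_from_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by an SPDK/DPDK -m core mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:          # lowest remaining bit set -> this core is selected
            cores.append(bit)
        mask >>= 1            # move to the next core's bit
        bit += 1
    return cores

# 0x1E = 0b11110 -> cores 1-4, matching the four reactor_run notices in the log.
print(cores_from_mask(0x1E))  # [1, 2, 3, 4]
```

This is consistent with the log: mask 0x1E leaves core 0 free (bit 0 clear) while the four reactors start on cores 1, 2, 3, and 4.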
00:29:16.271 [2024-11-19 16:35:06.559521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:16.271 [2024-11-19 16:35:06.559585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:16.271 [2024-11-19 16:35:06.559652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:16.271 [2024-11-19 16:35:06.559655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.531 [2024-11-19 16:35:06.703924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:16.531 16:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:16.531 16:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:16.531 Malloc1 00:29:16.531 [2024-11-19 16:35:06.812150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:16.531 Malloc2 00:29:16.790 Malloc3 00:29:16.790 Malloc4 00:29:16.790 Malloc5 00:29:16.790 Malloc6 00:29:16.790 Malloc7 00:29:17.049 Malloc8 00:29:17.049 Malloc9 
00:29:17.049 Malloc10 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=323855 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 323855 /var/tmp/bdevperf.sock 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 323855 ']' 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:17.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:17.049 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.049 { 00:29:17.049 "params": { 00:29:17.049 "name": "Nvme$subsystem", 00:29:17.049 "trtype": "$TEST_TRANSPORT", 00:29:17.049 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.049 "adrfam": "ipv4", 00:29:17.049 "trsvcid": "$NVMF_PORT", 00:29:17.049 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.049 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.049 "hdgst": ${hdgst:-false}, 00:29:17.049 "ddgst": ${ddgst:-false} 00:29:17.049 }, 00:29:17.049 "method": "bdev_nvme_attach_controller" 00:29:17.049 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": 
${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 
00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:17.050 { 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme$subsystem", 00:29:17.050 "trtype": "$TEST_TRANSPORT", 00:29:17.050 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "$NVMF_PORT", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:17.050 "hdgst": ${hdgst:-false}, 00:29:17.050 "ddgst": ${ddgst:-false} 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 } 00:29:17.050 EOF 00:29:17.050 )") 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@584 -- # jq . 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:17.050 16:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme1", 00:29:17.050 "trtype": "tcp", 00:29:17.050 "traddr": "10.0.0.2", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "4420", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:17.050 "hdgst": false, 00:29:17.050 "ddgst": false 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 },{ 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme2", 00:29:17.050 "trtype": "tcp", 00:29:17.050 "traddr": "10.0.0.2", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "4420", 00:29:17.050 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:17.050 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:17.050 "hdgst": false, 00:29:17.050 "ddgst": false 00:29:17.050 }, 00:29:17.050 "method": "bdev_nvme_attach_controller" 00:29:17.050 },{ 00:29:17.050 "params": { 00:29:17.050 "name": "Nvme3", 00:29:17.050 "trtype": "tcp", 00:29:17.050 "traddr": "10.0.0.2", 00:29:17.050 "adrfam": "ipv4", 00:29:17.050 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme4", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 
00:29:17.051 "params": { 00:29:17.051 "name": "Nvme5", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme6", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme7", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme8", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme9", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:17.051 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 },{ 00:29:17.051 "params": { 00:29:17.051 "name": "Nvme10", 00:29:17.051 "trtype": "tcp", 00:29:17.051 "traddr": "10.0.0.2", 00:29:17.051 "adrfam": "ipv4", 00:29:17.051 "trsvcid": "4420", 00:29:17.051 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:17.051 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:17.051 "hdgst": false, 00:29:17.051 "ddgst": false 00:29:17.051 }, 00:29:17.051 "method": "bdev_nvme_attach_controller" 00:29:17.051 }' 00:29:17.051 [2024-11-19 16:35:07.341591] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:29:17.051 [2024-11-19 16:35:07.341678] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:29:17.310 [2024-11-19 16:35:07.417256] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.311 [2024-11-19 16:35:07.464806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 323855 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:29:19.248 16:35:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:29:20.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 323855 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 323687 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.343 { 00:29:20.343 "params": { 00:29:20.343 "name": "Nvme$subsystem", 00:29:20.343 "trtype": "$TEST_TRANSPORT", 00:29:20.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.343 "adrfam": "ipv4", 00:29:20.343 "trsvcid": "$NVMF_PORT", 00:29:20.343 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.343 "hdgst": ${hdgst:-false}, 00:29:20.343 "ddgst": ${ddgst:-false} 00:29:20.343 }, 00:29:20.343 "method": "bdev_nvme_attach_controller" 00:29:20.343 } 00:29:20.343 EOF 00:29:20.343 )") 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.343 { 00:29:20.343 "params": { 00:29:20.343 "name": "Nvme$subsystem", 00:29:20.343 "trtype": "$TEST_TRANSPORT", 00:29:20.343 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.343 "adrfam": "ipv4", 00:29:20.343 "trsvcid": "$NVMF_PORT", 00:29:20.343 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.343 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.343 "hdgst": ${hdgst:-false}, 00:29:20.343 "ddgst": ${ddgst:-false} 00:29:20.343 }, 00:29:20.343 "method": "bdev_nvme_attach_controller" 00:29:20.343 } 00:29:20.343 EOF 00:29:20.343 )") 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.343 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.343 { 00:29:20.343 "params": { 00:29:20.343 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": 
${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 
00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:20.344 { 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme$subsystem", 00:29:20.344 "trtype": "$TEST_TRANSPORT", 00:29:20.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "$NVMF_PORT", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:20.344 "hdgst": ${hdgst:-false}, 00:29:20.344 "ddgst": ${ddgst:-false} 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 } 00:29:20.344 EOF 00:29:20.344 )") 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:29:20.344 16:35:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme1", 00:29:20.344 "trtype": "tcp", 00:29:20.344 "traddr": "10.0.0.2", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "4420", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:20.344 "hdgst": false, 00:29:20.344 "ddgst": false 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 },{ 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme2", 00:29:20.344 "trtype": "tcp", 00:29:20.344 "traddr": "10.0.0.2", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "4420", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:20.344 "hdgst": false, 00:29:20.344 "ddgst": false 00:29:20.344 }, 
00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 },{ 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme3", 00:29:20.344 "trtype": "tcp", 00:29:20.344 "traddr": "10.0.0.2", 00:29:20.344 "adrfam": "ipv4", 00:29:20.344 "trsvcid": "4420", 00:29:20.344 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:20.344 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:20.344 "hdgst": false, 00:29:20.344 "ddgst": false 00:29:20.344 }, 00:29:20.344 "method": "bdev_nvme_attach_controller" 00:29:20.344 },{ 00:29:20.344 "params": { 00:29:20.344 "name": "Nvme4", 00:29:20.344 "trtype": "tcp", 00:29:20.344 "traddr": "10.0.0.2", 00:29:20.344 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme5", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme6", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme7", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 
"subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme8", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme9", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 },{ 00:29:20.345 "params": { 00:29:20.345 "name": "Nvme10", 00:29:20.345 "trtype": "tcp", 00:29:20.345 "traddr": "10.0.0.2", 00:29:20.345 "adrfam": "ipv4", 00:29:20.345 "trsvcid": "4420", 00:29:20.345 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:20.345 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:20.345 "hdgst": false, 00:29:20.345 "ddgst": false 00:29:20.345 }, 00:29:20.345 "method": "bdev_nvme_attach_controller" 00:29:20.345 }' 00:29:20.345 [2024-11-19 16:35:10.398739] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:29:20.345 [2024-11-19 16:35:10.398829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid324169 ] 00:29:20.345 [2024-11-19 16:35:10.472610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.345 [2024-11-19 16:35:10.520609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:21.818 Running I/O for 1 seconds...
00:29:23.011 1805.00 IOPS, 112.81 MiB/s
00:29:23.011 Latency(us)
00:29:23.012 [2024-11-19T15:35:13.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:23.012 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme1n1 : 1.07 239.65 14.98 0.00 0.00 264092.82 19515.16 256318.58
00:29:23.012 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme2n1 : 1.09 234.81 14.68 0.00 0.00 265216.95 23592.96 262532.36
00:29:23.012 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme3n1 : 1.08 236.13 14.76 0.00 0.00 259085.65 18738.44 243891.01
00:29:23.012 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme4n1 : 1.08 241.46 15.09 0.00 0.00 248035.28 3519.53 243891.01
00:29:23.012 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme5n1 : 1.12 228.40 14.27 0.00 0.00 259203.41 20388.98 253211.69
00:29:23.012 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme6n1 : 1.13 226.49 14.16 0.00 0.00 256982.85 21165.70 251658.24
00:29:23.012 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme7n1 : 1.13 231.50 14.47 0.00 0.00 246055.13 4102.07 251658.24
00:29:23.012 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme8n1 : 1.19 269.45 16.84 0.00 0.00 209766.63 16214.09 251658.24
00:29:23.012 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme9n1 : 1.20 266.85 16.68 0.00 0.00 208306.63 9757.58 254765.13
00:29:23.012 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:23.012 Verification LBA range: start 0x0 length 0x400
00:29:23.012 Nvme10n1 : 1.20 267.54 16.72 0.00 0.00 204450.78 7427.41 276513.37
00:29:23.012 [2024-11-19T15:35:13.351Z] ===================================================================================================================
00:29:23.012 [2024-11-19T15:35:13.351Z] Total : 2442.27 152.64 0.00 0.00 239728.42 3519.53 276513.37
00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1
-- target/shutdown.sh@46 -- # nvmftestfini 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:23.273 rmmod nvme_tcp 00:29:23.273 rmmod nvme_fabrics 00:29:23.273 rmmod nvme_keyring 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 323687 ']' 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 323687 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 323687 ']' 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 323687 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 323687 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 323687' 00:29:23.273 killing process with pid 323687 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 323687 00:29:23.273 16:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 323687 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:23.841 16:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:23.841 16:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.743 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:26.003 00:29:26.003 real 0m11.877s 00:29:26.003 user 0m34.709s 00:29:26.003 sys 0m3.182s 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:26.003 ************************************ 00:29:26.003 END TEST nvmf_shutdown_tc1 00:29:26.003 ************************************ 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:26.003 ************************************ 00:29:26.003 START TEST nvmf_shutdown_tc2 00:29:26.003 ************************************ 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:26.003 16:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:26.003 16:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:26.003 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:26.003 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.003 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:26.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.004 16:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:26.004 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:26.004 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:26.004 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:29:26.004 00:29:26.004 --- 10.0.0.2 ping statistics --- 00:29:26.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.004 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:26.004 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:26.004 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:29:26.004 00:29:26.004 --- 10.0.0.1 ping statistics --- 00:29:26.004 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:26.004 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.004 
16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=325053 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 325053 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 325053 ']' 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:26.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:26.004 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.263 [2024-11-19 16:35:16.356673] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:29:26.263 [2024-11-19 16:35:16.356745] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:26.263 [2024-11-19 16:35:16.425880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:26.263 [2024-11-19 16:35:16.472320] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:26.263 [2024-11-19 16:35:16.472391] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:26.263 [2024-11-19 16:35:16.472405] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:26.263 [2024-11-19 16:35:16.472427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:26.263 [2024-11-19 16:35:16.472436] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:26.263 [2024-11-19 16:35:16.473903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:26.263 [2024-11-19 16:35:16.474169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:26.264 [2024-11-19 16:35:16.474220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:26.264 [2024-11-19 16:35:16.474223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:26.264 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:26.264 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:26.264 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:26.264 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:26.264 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.522 [2024-11-19 16:35:16.610095] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:26.522 16:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:26.522 16:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:26.522 Malloc1 00:29:26.522 [2024-11-19 16:35:16.698923] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:26.522 Malloc2 00:29:26.522 Malloc3 00:29:26.522 Malloc4 00:29:26.781 Malloc5 00:29:26.781 Malloc6 00:29:26.781 Malloc7 00:29:26.781 Malloc8 00:29:26.781 Malloc9 
00:29:26.781 Malloc10 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=325121 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 325121 /var/tmp/bdevperf.sock 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 325121 ']' 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:29:27.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.040 { 00:29:27.040 "params": { 00:29:27.040 "name": "Nvme$subsystem", 00:29:27.040 "trtype": "$TEST_TRANSPORT", 00:29:27.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.040 "adrfam": "ipv4", 00:29:27.040 "trsvcid": "$NVMF_PORT", 00:29:27.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.040 "hdgst": ${hdgst:-false}, 00:29:27.040 "ddgst": ${ddgst:-false} 00:29:27.040 }, 00:29:27.040 "method": "bdev_nvme_attach_controller" 00:29:27.040 } 00:29:27.040 EOF 00:29:27.040 )") 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.040 { 00:29:27.040 "params": { 00:29:27.040 "name": "Nvme$subsystem", 00:29:27.040 "trtype": "$TEST_TRANSPORT", 00:29:27.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.040 "adrfam": "ipv4", 00:29:27.040 "trsvcid": "$NVMF_PORT", 00:29:27.040 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.040 "hdgst": ${hdgst:-false}, 00:29:27.040 "ddgst": ${ddgst:-false} 00:29:27.040 }, 00:29:27.040 "method": "bdev_nvme_attach_controller" 00:29:27.040 } 00:29:27.040 EOF 00:29:27.040 )") 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.040 { 00:29:27.040 "params": { 00:29:27.040 "name": "Nvme$subsystem", 00:29:27.040 "trtype": "$TEST_TRANSPORT", 00:29:27.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.040 "adrfam": "ipv4", 00:29:27.040 "trsvcid": "$NVMF_PORT", 00:29:27.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.040 "hdgst": ${hdgst:-false}, 00:29:27.040 "ddgst": ${ddgst:-false} 00:29:27.040 }, 00:29:27.040 "method": "bdev_nvme_attach_controller" 00:29:27.040 } 00:29:27.040 EOF 00:29:27.040 )") 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.040 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.040 { 00:29:27.040 "params": { 00:29:27.040 "name": "Nvme$subsystem", 00:29:27.040 "trtype": "$TEST_TRANSPORT", 00:29:27.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.040 "adrfam": "ipv4", 00:29:27.040 "trsvcid": "$NVMF_PORT", 00:29:27.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.040 "hdgst": 
${hdgst:-false}, 00:29:27.040 "ddgst": ${ddgst:-false} 00:29:27.040 }, 00:29:27.040 "method": "bdev_nvme_attach_controller" 00:29:27.040 } 00:29:27.040 EOF 00:29:27.040 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 
00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.041 { 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme$subsystem", 00:29:27.041 "trtype": "$TEST_TRANSPORT", 00:29:27.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "$NVMF_PORT", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.041 "hdgst": ${hdgst:-false}, 00:29:27.041 "ddgst": ${ddgst:-false} 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 } 00:29:27.041 EOF 00:29:27.041 )") 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@584 -- # jq . 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:27.041 16:35:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme1", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme2", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme3", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme4", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 
00:29:27.041 "params": { 00:29:27.041 "name": "Nvme5", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme6", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.041 "params": { 00:29:27.041 "name": "Nvme7", 00:29:27.041 "trtype": "tcp", 00:29:27.041 "traddr": "10.0.0.2", 00:29:27.041 "adrfam": "ipv4", 00:29:27.041 "trsvcid": "4420", 00:29:27.041 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:27.041 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:27.041 "hdgst": false, 00:29:27.041 "ddgst": false 00:29:27.041 }, 00:29:27.041 "method": "bdev_nvme_attach_controller" 00:29:27.041 },{ 00:29:27.042 "params": { 00:29:27.042 "name": "Nvme8", 00:29:27.042 "trtype": "tcp", 00:29:27.042 "traddr": "10.0.0.2", 00:29:27.042 "adrfam": "ipv4", 00:29:27.042 "trsvcid": "4420", 00:29:27.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:27.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:27.042 "hdgst": false, 00:29:27.042 "ddgst": false 00:29:27.042 }, 00:29:27.042 "method": "bdev_nvme_attach_controller" 00:29:27.042 },{ 00:29:27.042 "params": { 00:29:27.042 "name": "Nvme9", 00:29:27.042 "trtype": "tcp", 00:29:27.042 "traddr": "10.0.0.2", 00:29:27.042 "adrfam": "ipv4", 00:29:27.042 "trsvcid": "4420", 00:29:27.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:27.042 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:29:27.042 "hdgst": false, 00:29:27.042 "ddgst": false 00:29:27.042 }, 00:29:27.042 "method": "bdev_nvme_attach_controller" 00:29:27.042 },{ 00:29:27.042 "params": { 00:29:27.042 "name": "Nvme10", 00:29:27.042 "trtype": "tcp", 00:29:27.042 "traddr": "10.0.0.2", 00:29:27.042 "adrfam": "ipv4", 00:29:27.042 "trsvcid": "4420", 00:29:27.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:27.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:27.042 "hdgst": false, 00:29:27.042 "ddgst": false 00:29:27.042 }, 00:29:27.042 "method": "bdev_nvme_attach_controller" 00:29:27.042 }' 00:29:27.042 [2024-11-19 16:35:17.210637] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:29:27.042 [2024-11-19 16:35:17.210726] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid325121 ] 00:29:27.042 [2024-11-19 16:35:17.282334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.042 [2024-11-19 16:35:17.329584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.938 Running I/O for 10 seconds... 
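The trace above shows `nvmf/common.sh` assembling the bdevperf controller config: one heredoc JSON fragment per subsystem appended to a `config` array, the fragments joined with commas, and the result validated with `jq .`. A minimal standalone sketch of that pattern (variable names `TEST_TRANSPORT`, `NVMF_FIRST_TARGET_IP`, `NVMF_PORT` mirror the log; two subsystems instead of ten for brevity):

```shell
#!/usr/bin/env bash
# Build one JSON fragment per subsystem, as the traced loop does.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / printf step in the
# trace does; the original then pipes the joined string through `jq .`.
IFS=,
printf '%s\n' "${config[*]}"
```

Because the heredoc is unquoted, `$subsystem` and the `${hdgst:-false}` defaults expand at append time, which is why the fully expanded per-controller JSON appears later in the trace.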
00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:28.938 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:28.939 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.196 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.455 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.455 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:29.455 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:29.455 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:29.713 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 325121 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 325121 ']' 
00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 325121 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325121 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325121' 00:29:29.714 killing process with pid 325121 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 325121 00:29:29.714 16:35:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 325121 00:29:29.714 1805.00 IOPS, 112.81 MiB/s [2024-11-19T15:35:20.053Z] Received shutdown signal, test time was about 1.099311 seconds 00:29:29.714 00:29:29.714 Latency(us) 00:29:29.714 [2024-11-19T15:35:20.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:29.714 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme1n1 : 1.09 234.44 14.65 0.00 0.00 269516.61 18835.53 260978.92 00:29:29.714 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme2n1 : 1.08 237.57 14.85 
0.00 0.00 262056.58 22622.06 267192.70 00:29:29.714 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme3n1 : 1.05 242.83 15.18 0.00 0.00 250957.75 30292.20 237677.23 00:29:29.714 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme4n1 : 1.06 244.37 15.27 0.00 0.00 244733.55 3495.25 250104.79 00:29:29.714 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme5n1 : 1.08 240.58 15.04 0.00 0.00 243751.82 5024.43 239230.67 00:29:29.714 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme6n1 : 1.09 233.82 14.61 0.00 0.00 247884.99 22816.24 251658.24 00:29:29.714 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme7n1 : 1.09 235.85 14.74 0.00 0.00 241178.55 18447.17 256318.58 00:29:29.714 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme8n1 : 1.07 238.52 14.91 0.00 0.00 233585.21 16990.81 233016.89 00:29:29.714 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme9n1 : 1.10 233.05 14.57 0.00 0.00 234868.62 8592.50 257872.02 00:29:29.714 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:29.714 Verification LBA range: start 0x0 length 0x400 00:29:29.714 Nvme10n1 : 1.05 182.68 11.42 0.00 0.00 292538.34 25826.04 290494.39 00:29:29.714 [2024-11-19T15:35:20.053Z] 
=================================================================================================================== 00:29:29.714 [2024-11-19T15:35:20.053Z] Total : 2323.71 145.23 0.00 0.00 251048.69 3495.25 290494.39 00:29:29.972 16:35:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 325053 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:30.908 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:30.909 rmmod 
nvme_tcp 00:29:30.909 rmmod nvme_fabrics 00:29:30.909 rmmod nvme_keyring 00:29:30.909 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 325053 ']' 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 325053 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 325053 ']' 00:29:31.168 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 325053 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 325053 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 325053' 00:29:31.169 killing process with pid 325053 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 
325053 00:29:31.169 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 325053 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.740 16:35:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:33.645 00:29:33.645 real 0m7.692s 00:29:33.645 user 0m23.616s 00:29:33.645 sys 0m1.517s 00:29:33.645 16:35:23 
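The two teardowns above both go through the `killprocess` helper from autotest_common.sh: verify the pid is non-empty and alive with `kill -0`, on Linux check via `ps` that the target is not the bare `sudo` wrapper, then kill and reap it. A sketch of that pattern, demonstrated against a throwaway `sleep` process rather than a real SPDK reactor:

```shell
#!/usr/bin/env bash
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" || return 1                 # still running?
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" != sudo ] || return 1  # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                    # reap so no zombie is left behind
    return 0
}

sleep 30 &
killprocess $!
```

The `wait` at the end is why the trace shows `kill 325121` immediately followed by `wait 325121`: the parent shell reaps the bdevperf process before moving on to the nvmf cleanup.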
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:33.645 ************************************ 00:29:33.645 END TEST nvmf_shutdown_tc2 00:29:33.645 ************************************ 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.645 ************************************ 00:29:33.645 START TEST nvmf_shutdown_tc3 00:29:33.645 ************************************ 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 
-- # net_devs=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:33.645 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:33.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:33.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.646 16:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:33.646 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:33.646 16:35:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:33.646 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:33.646 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:33.905 16:35:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:33.905 16:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:33.905 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:33.905 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:29:33.905 00:29:33.905 --- 10.0.0.2 ping statistics --- 00:29:33.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.905 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:33.905 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:33.905 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:29:33.905 00:29:33.905 --- 10.0.0.1 ping statistics --- 00:29:33.905 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:33.905 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.905 
16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=326110 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 326110 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 326110 ']' 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.905 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:33.905 [2024-11-19 16:35:24.238546] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:29:33.905 [2024-11-19 16:35:24.238621] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.164 [2024-11-19 16:35:24.308586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.164 [2024-11-19 16:35:24.353398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.164 [2024-11-19 16:35:24.353455] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.164 [2024-11-19 16:35:24.353478] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.164 [2024-11-19 16:35:24.353489] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.164 [2024-11-19 16:35:24.353498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:34.164 [2024-11-19 16:35:24.354946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.164 [2024-11-19 16:35:24.355053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.164 [2024-11-19 16:35:24.355147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:34.164 [2024-11-19 16:35:24.355151] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.164 [2024-11-19 16:35:24.489521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.164 16:35:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.164 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.424 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:34.425 16:35:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.425 Malloc1 00:29:34.425 [2024-11-19 16:35:24.580265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.425 Malloc2 00:29:34.425 Malloc3 00:29:34.425 Malloc4 00:29:34.425 Malloc5 00:29:34.685 Malloc6 00:29:34.685 Malloc7 00:29:34.685 Malloc8 00:29:34.685 Malloc9 
00:29:34.685 Malloc10 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=326216 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 326216 /var/tmp/bdevperf.sock 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 326216 ']' 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:34.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.944 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 
"adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": 
${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 )") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.945 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.945 "hdgst": ${hdgst:-false}, 00:29:34.945 "ddgst": ${ddgst:-false} 00:29:34.945 }, 00:29:34.945 "method": "bdev_nvme_attach_controller" 00:29:34.945 } 00:29:34.945 EOF 00:29:34.945 
)") 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.945 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.945 { 00:29:34.945 "params": { 00:29:34.945 "name": "Nvme$subsystem", 00:29:34.945 "trtype": "$TEST_TRANSPORT", 00:29:34.945 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.945 "adrfam": "ipv4", 00:29:34.945 "trsvcid": "$NVMF_PORT", 00:29:34.945 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.946 "hdgst": ${hdgst:-false}, 00:29:34.946 "ddgst": ${ddgst:-false} 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 } 00:29:34.946 EOF 00:29:34.946 )") 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:34.946 { 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme$subsystem", 00:29:34.946 "trtype": "$TEST_TRANSPORT", 00:29:34.946 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "$NVMF_PORT", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.946 "hdgst": ${hdgst:-false}, 00:29:34.946 "ddgst": ${ddgst:-false} 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 } 00:29:34.946 EOF 00:29:34.946 )") 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:34.946 
16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:34.946 16:35:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme1", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme2", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme3", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme4", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 
00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme5", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme6", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme7", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme8", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme9", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 },{ 00:29:34.946 "params": { 00:29:34.946 "name": "Nvme10", 00:29:34.946 "trtype": "tcp", 00:29:34.946 "traddr": "10.0.0.2", 00:29:34.946 "adrfam": "ipv4", 00:29:34.946 "trsvcid": "4420", 00:29:34.946 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:34.946 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:34.946 "hdgst": false, 00:29:34.946 "ddgst": false 00:29:34.946 }, 00:29:34.946 "method": "bdev_nvme_attach_controller" 00:29:34.946 }' 00:29:34.946 [2024-11-19 16:35:25.102558] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:29:34.946 [2024-11-19 16:35:25.102634] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid326216 ] 00:29:34.946 [2024-11-19 16:35:25.177608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.946 [2024-11-19 16:35:25.224527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:36.852 Running I/O for 10 seconds... 
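Once bdevperf reports "Running I/O for 10 seconds...", the shutdown test's `waitforio` helper polls Nvme1n1's read-op count until it crosses 100, which is why the trace shows `read_io_count=3` on the first poll and `131` after a 0.25 s sleep. A hedged reconstruction, with the RPC stubbed out so the sketch runs standalone (the real call is `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1`):

```shell
# Reconstruction of the target/shutdown.sh waitforio loop, with rpc_cmd
# replaced by a deterministic stub that grows read ops by 70 per round.
rpc_iostat_stub() {
    echo "{\"bdevs\": [{\"num_read_ops\": $(( (11 - i) * 70 ))}]}"
}

ret=1
i=10
while [ "$i" -ne 0 ]; do
    # Same extraction the trace shows: jq -r '.bdevs[0].num_read_ops'
    read_io_count=$(rpc_iostat_stub | jq -r '.bdevs[0].num_read_ops')
    if [ "$read_io_count" -ge 100 ]; then
        ret=0          # enough I/O observed; safe to shut the target down
        break
    fi
    sleep 0.25
    i=$((i - 1))
done
echo "ret=$ret read_io_count=$read_io_count"
```

The countdown gives the loop a roughly 2.5 s budget: if ten polls never reach 100 reads, `ret` stays 1 and the test bails out instead of killing the target mid-setup.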
00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:36.852 16:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:36.852 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:37.110 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 326110 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326110 ']' 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326110 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 326110 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 326110' 00:29:37.390 killing process with pid 326110 00:29:37.390 16:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 326110 00:29:37.390 16:35:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 326110 00:29:37.390 [2024-11-19 16:35:27.514760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf89c00 is same with the state(6) to be set 00:29:37.390
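The `killprocess` helper traced above (autotest_common.sh) guards the kill: `kill -0` probes that pid 326110 is still alive, and on Linux `ps --no-headers -o comm=` checks the process name (here `reactor_1`) so it never signals a `sudo` wrapper by mistake. A self-contained sketch of that guard, with a throwaway `sleep` standing in for the nvmf target:

```shell
# Sketch of the autotest_common.sh killprocess guard seen in the trace.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # the '[' -z ... ']' guard
    kill -0 "$pid" 2>/dev/null || return 1   # kill -0: is the pid alive?
    if [ "$(uname)" = Linux ]; then
        # Refuse to signal the sudo wrapper itself, only the real process.
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" != sudo ] || return 1
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap it; ignore signal status
}

sleep 30 &        # throwaway background job standing in for pid 326110
killprocess $!
rc=$?
echo "killprocess rc=$rc"
```

The `wait` at the end is what lets the trace print the flood of qpair-teardown errors below synchronously, before the script moves on to `nvmftestfini`.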
00:29:37.391 [2024-11-19 16:35:27.517028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c790 is same with the state(6) to be set
00:29:37.392 [2024-11-19 16:35:27.519541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519603] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3
nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd930 is same with the state(6) to be set 00:29:37.392 [2024-11-19 16:35:27.519774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519811] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519839] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519866] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:37.392 [2024-11-19 16:35:27.519879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.392 [2024-11-19 16:35:27.519891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73450 is same with the state(6) to be set 00:29:37.392 [2024-11-19 16:35:27.520226] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.392 [2024-11-19 16:35:27.520253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.392 [2024-11-19 16:35:27.520266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520369] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 
is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520753] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 
00:29:37.393 [2024-11-19 16:35:27.520764] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520921] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.520937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a0d0 is same with the state(6) to be set 00:29:37.393 [2024-11-19 16:35:27.521120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.393 [2024-11-19 16:35:27.521448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.393 [2024-11-19 16:35:27.521463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:37.394 [2024-11-19 16:35:27.521477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521637] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.521980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.521993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 
[2024-11-19 16:35:27.522168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522335] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.394 [2024-11-19 16:35:27.522457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.394 [2024-11-19 16:35:27.522473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522629]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522682] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set
00:29:37.395 [2024-11-19 16:35:27.522715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with
the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522808] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522955] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.395 [2024-11-19 16:35:27.522975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.395 [2024-11-19 16:35:27.522981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.395 [2024-11-19 16:35:27.522989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.522996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.396 [2024-11-19 16:35:27.523001] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.396 [2024-11-19 16:35:27.523014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.396 [2024-11-19 16:35:27.523039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.396 
[2024-11-19 16:35:27.523051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.396 [2024-11-19 16:35:27.523063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.396 [2024-11-19 16:35:27.523087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.396 [2024-11-19 16:35:27.523101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.396 [2024-11-19 16:35:27.523139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.396 [2024-11-19 16:35:27.523151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8a5a0 is same with the 
state(6) to be set 00:29:37.396 [2024-11-19 16:35:27.523158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.396 [2024-11-19 16:35:27.523196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:37.396 [2024-11-19 16:35:27.524556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8aa90 is same with the state(6) to be set 00:29:37.397 [2024-11-19 16:35:27.526649] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b430 is same with the state(6) to be set 00:29:37.398 [2024-11-19 16:35:27.528684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b900 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.529431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b900 
is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.529442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b900 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.529454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b900 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.529466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8b900 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 
00:29:37.399 [2024-11-19 16:35:27.530629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530641] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530759] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530771] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530783] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530917] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530945] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530957] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.530992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.399 [2024-11-19 16:35:27.531061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 
is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531120] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531205] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 
00:29:37.400 [2024-11-19 16:35:27.531250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531287] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.531299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8bdf0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532150] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532218] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532252] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532264] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532405] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 
is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532472] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532587] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 
00:29:37.400 [2024-11-19 16:35:27.532598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.400 [2024-11-19 16:35:27.532689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.401 [2024-11-19 16:35:27.532704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.401 [2024-11-19 16:35:27.532716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.401 [2024-11-19 16:35:27.532727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set 00:29:37.401 [2024-11-19 16:35:27.532739] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c2c0 is same with the state(6) to be set
[identical nvmf_tcp_qpair_set_recv_state errors for tqpair=0xf8c2c0 repeated; omitted]
00:29:37.401 [2024-11-19 16:35:27.543078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:37.401 [2024-11-19 16:35:27.543151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1, cid:2, and cid:3; omitted]
00:29:37.401 [2024-11-19 16:35:27.543254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ad950 is same with the state(6) to be set
[the same four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs and recv-state error repeated for tqpair=0x21ae8b0, 0x21ce9e0, 0x21af1a0, 0x1cbbf50, 0x1d71700, 0x1d7b0b0, and 0x1d6d5b0; omitted]
00:29:37.401 [2024-11-19 16:35:27.543654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd930 (9): Bad file descriptor
00:29:37.401 [2024-11-19 16:35:27.544029] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73450 (9): Bad file descriptor
00:29:37.402 [2024-11-19 16:35:27.545950] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.402
[2024-11-19 16:35:27.545976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.402
[... WRITE / ABORTED - SQ DELETION pair repeated for cid 1-63 (lba 16512-24448, len:128 each) ...]
[2024-11-19 16:35:27.548031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217b910 is same with the state(6) to be set 00:29:37.404
[2024-11-19 16:35:27.548497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:37.404
[2024-11-19 16:35:27.548543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b0b0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.550148] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.550190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:29:37.404
[2024-11-19 16:35:27.550217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ad950 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.550298] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.551100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.404
[2024-11-19 16:35:27.551144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7b0b0 with addr=10.0.0.2, port=4420 00:29:37.404
[2024-11-19 16:35:27.551162]
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7b0b0 is same with the state(6) to be set 00:29:37.404
[2024-11-19 16:35:27.551662] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.551721] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.551773] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.551839] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.551947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:37.404
[2024-11-19 16:35:27.551975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ad950 with addr=10.0.0.2, port=4420 00:29:37.404
[2024-11-19 16:35:27.551992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ad950 is same with the state(6) to be set 00:29:37.404
[2024-11-19 16:35:27.552011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b0b0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.552104] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.552175] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:37.404
[2024-11-19 16:35:27.552277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ad950 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.552302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:37.404
[2024-11-19 16:35:27.552316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:37.404
[2024-11-19 16:35:27.552332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:37.404
[2024-11-19 16:35:27.552347] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:37.404
[2024-11-19 16:35:27.552441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:37.404
[2024-11-19 16:35:27.552461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:37.404
[2024-11-19 16:35:27.552474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:37.404
[2024-11-19 16:35:27.552487] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:37.404
[2024-11-19 16:35:27.552991] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae8b0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ce9e0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af1a0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbbf50 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71700 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d5b0 (9): Bad file descriptor 00:29:37.404
[2024-11-19 16:35:27.553362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.404
[2024-11-19 16:35:27.553387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.404
[... WRITE / ABORTED - SQ DELETION pair repeated for cid 57-63 (lba 23680-24448, len:128 each) ...]
[... READ / ABORTED - SQ DELETION pair repeated for cid 0-2 (lba 16384-16640, len:128 each) ...]
[2024-11-19 16:35:27.553735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405
[2024-11-19 16:35:27.553749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:37.405 [2024-11-19 16:35:27.553930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.553979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.553994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554662] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.405 [2024-11-19 16:35:27.554852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.405 [2024-11-19 16:35:27.554866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.554881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.554896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.554912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.554927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.554943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.554957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.554973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.554987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 
16:35:27.555213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555379] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.555409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.555424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d75cc0 is same with the state(6) to be set 00:29:37.406 [2024-11-19 16:35:27.556736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.556983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.556999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.406 [2024-11-19 16:35:27.557013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.406 [2024-11-19 16:35:27.557029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 
[2024-11-19 16:35:27.557043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557755] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557924] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.557984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.557999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.558013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.558029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.558047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.558063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.407 [2024-11-19 16:35:27.558085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.407 [2024-11-19 16:35:27.558102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 
16:35:27.558286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558454] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.408 [2024-11-19 16:35:27.558603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.408 [2024-11-19 16:35:27.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.408 [2024-11-19 16:35:27.558632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.408 [2024-11-19 16:35:27.558648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.408 [2024-11-19 16:35:27.558662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.408 [2024-11-19 16:35:27.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.408 [2024-11-19 16:35:27.558691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.408 [2024-11-19 16:35:27.558707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.408 [2024-11-19 16:35:27.558720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:37.408 [2024-11-19 16:35:27.558735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23ca380 is same with the state(6) to be set
00:29:37.408 [2024-11-19 16:35:27.559954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:37.408 [2024-11-19 16:35:27.559984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:37.408 [2024-11-19 16:35:27.560295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.408 [2024-11-19 16:35:27.560325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73450 with addr=10.0.0.2, port=4420
00:29:37.408 [2024-11-19 16:35:27.560343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73450 is same with the state(6) to be set
00:29:37.408 [2024-11-19 16:35:27.560463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.408 [2024-11-19 16:35:27.560489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cd930 with addr=10.0.0.2, port=4420
00:29:37.408 [2024-11-19 16:35:27.560505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd930 is same with the state(6) to be set
00:29:37.408 [2024-11-19 16:35:27.561090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73450 (9): Bad file descriptor
00:29:37.408 [2024-11-19 16:35:27.561127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd930 (9): Bad file descriptor
00:29:37.408 [2024-11-19 16:35:27.561223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:37.408 [2024-11-19 16:35:27.561244] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:37.408 [2024-11-19 16:35:27.561261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:37.408 [2024-11-19 16:35:27.561277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:37.408 [2024-11-19 16:35:27.561293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:37.408 [2024-11-19 16:35:27.561305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:37.408 [2024-11-19 16:35:27.561318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:37.408 [2024-11-19 16:35:27.561330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:37.408 [2024-11-19 16:35:27.561426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:37.408 [2024-11-19 16:35:27.561501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:37.408 [2024-11-19 16:35:27.561622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.408 [2024-11-19 16:35:27.561650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7b0b0 with addr=10.0.0.2, port=4420
00:29:37.408 [2024-11-19 16:35:27.561666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7b0b0 is same with the state(6) to be set
00:29:37.408 [2024-11-19 16:35:27.561828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.408 [2024-11-19 16:35:27.561855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ad950 with addr=10.0.0.2, port=4420
00:29:37.408 [2024-11-19 16:35:27.561871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ad950 is same with the state(6) to be set
00:29:37.408 [2024-11-19 16:35:27.561889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b0b0 (9): Bad file descriptor
00:29:37.408 [2024-11-19 16:35:27.561943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ad950 (9): Bad file descriptor
00:29:37.408 [2024-11-19 16:35:27.561963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:37.408 [2024-11-19 16:35:27.561976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:37.409 [2024-11-19 16:35:27.561989] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:37.409 [2024-11-19 16:35:27.562001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:37.409 [2024-11-19 16:35:27.562050] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state
00:29:37.409 [2024-11-19 16:35:27.562065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed
00:29:37.409 [2024-11-19 16:35:27.562094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state.
00:29:37.409 [2024-11-19 16:35:27.562107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed.
00:29:37.409 [2024-11-19 16:35:27.563177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:37.409 [2024-11-19 16:35:27.563718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563882] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.563977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.563992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.409 [2024-11-19 16:35:27.564171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.409 [2024-11-19 16:35:27.564188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 16:35:27.564394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.410 [2024-11-19 16:35:27.564410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.410 [2024-11-19 
16:35:27.564425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[repetitive NOTICE output condensed: READ commands sqid:1 cid:40-63 nsid:1 lba:21504-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:37.410 [2024-11-19 16:35:27.565202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e8930 is same with the state(6) to be set
[repetitive NOTICE output condensed: READ commands sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:37.412 [2024-11-19 16:35:27.568570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217a4b0 is same with the state(6) to be set
[repetitive NOTICE output condensed: READ commands sqid:1 cid:0-28 nsid:1 lba:8192-11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:29:37.413 [2024-11-19 16:35:27.570775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.413 [2024-11-19 16:35:27.570972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.413 [2024-11-19 16:35:27.570988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 
16:35:27.571135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571306] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 
[2024-11-19 16:35:27.571650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.571834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.571848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217ce40 is same with the state(6) to be set 00:29:37.414 [2024-11-19 16:35:27.573102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.573137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.573159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.573174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.573190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.573205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.573221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.414 [2024-11-19 16:35:27.573236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.414 [2024-11-19 16:35:27.573251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 
[2024-11-19 16:35:27.573443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 
nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:37.415 [2024-11-19 16:35:27.573958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.573972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.573988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574143] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.415 [2024-11-19 16:35:27.574232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.415 [2024-11-19 16:35:27.574247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.416 [2024-11-19 16:35:27.574262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.416 [2024-11-19 16:35:27.574277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.416 [2024-11-19 16:35:27.574292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.416 [2024-11-19 16:35:27.574307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.416 [2024-11-19 16:35:27.574321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:39 through cid:63 (lba 13184 through 16256, step 128) ...]
00:29:37.416 [2024-11-19 16:35:27.575110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x217e370 is same with the state(6) to be set
00:29:37.416 [2024-11-19 16:35:27.576351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.416 [2024-11-19 16:35:27.576381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:35 (lba 8320 through 12672, step 128) ...]
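The NOTICE entries in this dump follow a fixed layout, so the aborted commands can be recovered programmatically. A minimal illustrative sketch (written only against the line format visible in this log; the function and field names are my own, not part of SPDK) that extracts each printed READ as a structured record:

```python
import re

# Matches the command-print lines visible in this log, e.g.
# "... nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 ..."
CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (?P<opc>\w+) "
    r"sqid:(?P<sqid>\d+) cid:(?P<cid>\d+) nsid:(?P<nsid>\d+) "
    r"lba:(?P<lba>\d+) len:(?P<len>\d+)"
)

def parse_printed_reads(log_text):
    """Return (cid, lba) pairs for every READ command print found in the dump."""
    return [
        (int(m.group("cid")), int(m.group("lba")))
        for m in CMD_RE.finditer(log_text)
        if m.group("opc") == "READ"
    ]

sample = (
    "[2024-11-19 16:35:27.574337] nvme_qpair.c: 243:nvme_io_qpair_print_command: "
    "*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 "
    "SGL TRANSPORT DATA BLOCK TRANSPORT 0x0"
)
print(parse_printed_reads(sample))  # → [(39, 13184)]
```

Fed the whole dump, this would confirm the pattern seen above: consecutive cids with lba advancing by 128 blocks per command, all completed with ABORTED - SQ DELETION (00/08) as the submission queues are torn down.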
00:29:37.418 [2024-11-19 16:35:27.577498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.418 [2024-11-19 16:35:27.577512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:37 through cid:63 (lba 12928 through 16256, step 128) ...]
00:29:37.418 [2024-11-19 16:35:27.578377] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x30c0d60 is same with the state(6) to be set
00:29:37.418 [2024-11-19 16:35:27.579628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.418 [2024-11-19 16:35:27.579651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) pairs repeat for cid:1 through cid:21 (lba 8320 through 10880, step 128) ...]
00:29:37.419 [2024-11-19 16:35:27.580329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA
BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580511] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580676] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.419 [2024-11-19 16:35:27.580795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.419 [2024-11-19 16:35:27.580810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.580840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.580870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.580912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.580943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.580973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.580988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 
16:35:27.581034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 
nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:37.420 [2024-11-19 16:35:27.581590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.420 [2024-11-19 16:35:27.581634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:37.420 [2024-11-19 16:35:27.581649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23c8eb0 is same with the state(6) to be set 00:29:37.420 [2024-11-19 16:35:27.584340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:29:37.420 [2024-11-19 16:35:27.584386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:29:37.420 [2024-11-19 16:35:27.584406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:37.420 [2024-11-19 16:35:27.584429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:29:37.420 [2024-11-19 16:35:27.584552] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:29:37.420 [2024-11-19 16:35:27.584578] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:29:37.420 [2024-11-19 16:35:27.584683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:37.420 task offset: 16384 on job bdev=Nvme2n1 fails
00:29:37.420
00:29:37.420 Latency(us)
00:29:37.420 [2024-11-19T15:35:27.759Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:37.420 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.420 Job: Nvme1n1 ended in about 0.66 seconds with error
00:29:37.420 Verification LBA range: start 0x0 length 0x400
00:29:37.420 Nvme1n1 : 0.66 195.06 12.19 97.53 0.00 215371.98 23884.23 242337.56
00:29:37.420 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.420 Job: Nvme2n1 ended in about 0.65 seconds with error
00:29:37.420 Verification LBA range: start 0x0 length 0x400
00:29:37.420 Nvme2n1 : 0.65 198.32 12.40 99.16 0.00 205588.86 26214.40 222142.77
00:29:37.420 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.420 Job: Nvme3n1 ended in about 0.67 seconds with error
00:29:37.420 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme3n1 : 0.67 192.19 12.01 96.10 0.00 206367.60 28932.93 234570.33
00:29:37.421 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme4n1 ended in about 0.67 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme4n1 : 0.67 191.24 11.95 95.62 0.00 201400.32 17864.63 246997.90
00:29:37.421 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme5n1 ended in about 0.65 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme5n1 : 0.65 197.07 12.32 98.53 0.00 188740.39 6650.69 246997.90
00:29:37.421 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme6n1 ended in about 0.67 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme6n1 : 0.67 95.15 5.95 95.15 0.00 285660.35 22136.60 240784.12
00:29:37.421 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme7n1 ended in about 0.68 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme7n1 : 0.68 94.70 5.92 94.70 0.00 278103.99 16990.81 254765.13
00:29:37.421 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme8n1 ended in about 0.68 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme8n1 : 0.68 94.24 5.89 94.24 0.00 270640.17 30098.01 259425.47
00:29:37.421 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme9n1 ended in about 0.68 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme9n1 : 0.68 93.79 5.86 93.79 0.00 263394.80 37476.88 259425.47
00:29:37.421 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:37.421 Job: Nvme10n1 ended in about 0.66 seconds with error
00:29:37.421 Verification LBA range: start 0x0 length 0x400
00:29:37.421 Nvme10n1 : 0.66 97.04 6.07 97.04 0.00 243353.22 31068.92 276513.37
00:29:37.421 [2024-11-19T15:35:27.760Z] ===================================================================================================================
00:29:37.421 [2024-11-19T15:35:27.760Z] Total : 1448.81 90.55 961.87 0.00 229388.50 6650.69 276513.37
00:29:37.421 [2024-11-19 16:35:27.614502] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:37.421 [2024-11-19 16:35:27.614622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:29:37.421 [2024-11-19 16:35:27.614969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.615010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6d5b0 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.615031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6d5b0 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.615140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d71700 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.615184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71700 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.615271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.615297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbbf50 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.615313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbbf50 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.615401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.615427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21af1a0 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.615443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21af1a0 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.617093] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:37.421 [2024-11-19 16:35:27.617123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:29:37.421 [2024-11-19 16:35:27.617142] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:29:37.421 [2024-11-19 16:35:27.617159] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:37.421 [2024-11-19 16:35:27.617312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.617341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ae8b0 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.617358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ae8b0 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.617435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.617461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ce9e0 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.617478] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ce9e0 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.617502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6d5b0 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.617526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d71700 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.617544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbbf50 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.617562] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21af1a0 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.617634] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress.
00:29:37.421 [2024-11-19 16:35:27.617659] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress.
00:29:37.421 [2024-11-19 16:35:27.617685] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress.
00:29:37.421 [2024-11-19 16:35:27.617708] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress.
00:29:37.421 [2024-11-19 16:35:27.618141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.618170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21cd930 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.618187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd930 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.618263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.618289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d73450 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.618305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d73450 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.618382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.618408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7b0b0 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.618424] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7b0b0 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.618499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:37.421 [2024-11-19 16:35:27.618525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21ad950 with addr=10.0.0.2, port=4420
00:29:37.421 [2024-11-19 16:35:27.618541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21ad950 is same with the state(6) to be set
00:29:37.421 [2024-11-19 16:35:27.618559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ae8b0 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.618577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ce9e0 (9): Bad file descriptor
00:29:37.421 [2024-11-19 16:35:27.618594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:29:37.421 [2024-11-19 16:35:27.618607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:29:37.421 [2024-11-19 16:35:27.618623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:29:37.421 [2024-11-19 16:35:27.618639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:29:37.421 [2024-11-19 16:35:27.618655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.618667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.618680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.618693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.618707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.618719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.618731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.618743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.618756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.618774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.618787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.618799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.618918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd930 (9): Bad file descriptor
00:29:37.422 [2024-11-19 16:35:27.618944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d73450 (9): Bad file descriptor
00:29:37.422 [2024-11-19 16:35:27.618962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7b0b0 (9): Bad file descriptor
00:29:37.422 [2024-11-19 16:35:27.618979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21ad950 (9): Bad file descriptor
00:29:37.422 [2024-11-19 16:35:27.618994] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.619007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.619020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.619033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.619048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.619061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.619086] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.619101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.619150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.619167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.619180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.619193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.619207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.619220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.619233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.619246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.619259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state
00:29:37.422 [2024-11-19 16:35:27.619271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed
00:29:37.422 [2024-11-19 16:35:27.619284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state.
00:29:37.422 [2024-11-19 16:35:27.619296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed.
00:29:37.422 [2024-11-19 16:35:27.619309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:37.422 [2024-11-19 16:35:27.619323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:37.422 [2024-11-19 16:35:27.619341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:37.422 [2024-11-19 16:35:27.619354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:29:37.680 16:35:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 326216 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 326216 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 326216 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:39.069 16:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # 
for i in {1..20} 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:39.069 rmmod nvme_tcp 00:29:39.069 rmmod nvme_fabrics 00:29:39.069 rmmod nvme_keyring 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 326110 ']' 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 326110 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 326110 ']' 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 326110 00:29:39.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (326110) - No such process 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 326110 is not found' 00:29:39.069 Process with pid 326110 is not found 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:39.069 16:35:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:39.069 16:35:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:40.977 00:29:40.977 real 0m7.248s 00:29:40.977 user 0m17.009s 00:29:40.977 sys 0m1.301s 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:40.977 ************************************ 00:29:40.977 END TEST nvmf_shutdown_tc3 00:29:40.977 ************************************ 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown 
-- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:40.977 ************************************ 00:29:40.977 START TEST nvmf_shutdown_tc4 00:29:40.977 ************************************ 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:40.977 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:40.977 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:40.978 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:40.978 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:40.978 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.978 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:40.978 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:40.978 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:40.978 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:41.237 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:41.237 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.237 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:29:41.237 00:29:41.237 --- 10.0.0.2 ping statistics --- 00:29:41.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.237 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.237 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:41.237 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:29:41.237 00:29:41.237 --- 10.0.0.1 ping statistics --- 00:29:41.237 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.237 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:41.237 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=327111 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 327111 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 327111 ']' 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:41.237 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.237 [2024-11-19 16:35:31.418237] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:29:41.237 [2024-11-19 16:35:31.418332] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.237 [2024-11-19 16:35:31.492407] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:41.237 [2024-11-19 16:35:31.541407] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:41.237 [2024-11-19 16:35:31.541474] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:41.237 [2024-11-19 16:35:31.541502] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:41.237 [2024-11-19 16:35:31.541513] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:41.237 [2024-11-19 16:35:31.541521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:41.238 [2024-11-19 16:35:31.543008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:41.238 [2024-11-19 16:35:31.543088] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:41.238 [2024-11-19 16:35:31.543145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:41.238 [2024-11-19 16:35:31.543149] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.496 [2024-11-19 16:35:31.680513] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:41.496 16:35:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.496 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.497 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:41.497 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:41.497 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:41.497 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:41.497 16:35:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:41.497 Malloc1 00:29:41.497 [2024-11-19 16:35:31.778937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:41.497 Malloc2 00:29:41.755 Malloc3 00:29:41.755 Malloc4 00:29:41.755 Malloc5 00:29:41.755 Malloc6 00:29:41.755 Malloc7 00:29:42.012 Malloc8 00:29:42.013 Malloc9 
00:29:42.013 Malloc10 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=327291 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:42.013 16:35:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:42.013 [2024-11-19 16:35:32.294012] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:47.293 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 327111 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327111 ']' 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327111 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 327111 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 327111' 00:29:47.294 killing process with pid 327111 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 327111 00:29:47.294 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 327111 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 
00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed 
with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 [2024-11-19 16:35:37.288828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.294 starting I/O failed: -6 00:29:47.294 [2024-11-19 16:35:37.289362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf26c0 is same with the state(6) to be set 00:29:47.294 [2024-11-19 16:35:37.289433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf26c0 is same with the state(6) to be set 00:29:47.294 [2024-11-19 16:35:37.289451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf26c0 is same with the state(6) to be set 00:29:47.294 starting I/O failed: -6 00:29:47.294 starting I/O failed: -6 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error 
(sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.294 starting I/O failed: -6 00:29:47.294 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 
00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.291006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.295 [2024-11-19 16:35:37.291129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291179] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291222] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf09e0 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 
starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 [2024-11-19 16:35:37.291653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291692] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 [2024-11-19 16:35:37.291708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.291724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 [2024-11-19 16:35:37.291736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 [2024-11-19 16:35:37.291748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.291760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with Write completed with error (sct=0, sc=8) 00:29:47.295 the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.291775] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.291787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0eb0 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, 
sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.292231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with Write completed with error (sct=0, sc=8) 00:29:47.295 the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.292261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with Write completed with error (sct=0, sc=8) 00:29:47.295 the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.292281] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 [2024-11-19 16:35:37.292293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 
16:35:37.292306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.292318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 [2024-11-19 16:35:37.292330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 starting I/O failed: -6 00:29:47.295 [2024-11-19 16:35:37.292342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf1380 is same with the state(6) to be set 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.295 Write completed with error (sct=0, sc=8) 00:29:47.295 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 
00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.292950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.292984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 
16:35:37.293000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 [2024-11-19 16:35:37.293024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.293065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.293101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293163] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.293177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 [2024-11-19 16:35:37.293214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf0510 is same with the state(6) to be set 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 Write completed with error (sct=0, sc=8) 00:29:47.296 starting I/O failed: -6 00:29:47.296 [2024-11-19 16:35:37.293621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.296 NVMe io qpair process completion error 00:29:47.296 [2024-11-19 16:35:37.296675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf3060 is same with the state(6) to be set 
00:29:47.296 [2024-11-19 16:35:37.296715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf3060 is same with the state(6) to be set
00:29:47.296 [recv-state error repeated for tqpair=0x1cf3060 through 16:35:37.296765]
00:29:47.296 Write completed with error (sct=0, sc=8)
00:29:47.296 starting I/O failed: -6
00:29:47.296 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:47.297 [2024-11-19 16:35:37.297843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.297 [2024-11-19 16:35:37.297975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf3a00 is same with the state(6) to be set
00:29:47.297 [recv-state error repeated for tqpair=0x1cf3a00 through 16:35:37.298059]
00:29:47.297 [2024-11-19 16:35:37.298157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cf2b90 is same with the state(6) to be set
00:29:47.297 [recv-state error repeated for tqpair=0x1cf2b90 through 16:35:37.298261]
00:29:47.297 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:47.297 [2024-11-19 16:35:37.298935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:47.298 [2024-11-19 16:35:37.300136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:47.299 [2024-11-19 16:35:37.301741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:47.299 NVMe io qpair process completion error
00:29:47.299 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:47.299 [2024-11-19 16:35:37.305579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b144e0 is same with the state(6) to be set
00:29:47.299 [recv-state error repeated for tqpair=0x1b144e0 through 16:35:37.305713]
00:29:47.299 [2024-11-19 16:35:37.306137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.299 [2024-11-19 16:35:37.306431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b149d0 is same with the state(6) to be set
00:29:47.299 [recv-state error repeated for tqpair=0x1b149d0 through 16:35:37.306547]
00:29:47.299 [2024-11-19 16:35:37.306624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14ec0 is same with the state(6) to be set
00:29:47.299 [recv-state error repeated for tqpair=0x1b14ec0 through 16:35:37.306730]
00:29:47.300 [2024-11-19 16:35:37.307067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:47.300 [2024-11-19 16:35:37.307303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b14010 is same with the state(6) to be set
00:29:47.300 [recv-state error repeated for tqpair=0x1b14010 through 16:35:37.307393]
00:29:47.300 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:47.300 [2024-11-19 16:35:37.308346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:47.301 [2024-11-19 16:35:37.310000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:47.301 NVMe io qpair process completion error
00:29:47.301 [repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" messages omitted]
00:29:47.301 [2024-11-19 16:35:37.311209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*:
[nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.301 starting I/O failed: -6 00:29:47.301 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write 
completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 [2024-11-19 16:35:37.312164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write 
completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 
00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 [2024-11-19 16:35:37.313490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, 
sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error 
(sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.302 Write completed with error (sct=0, sc=8) 00:29:47.302 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with 
error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 [2024-11-19 16:35:37.315542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.303 NVMe io qpair process completion error 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write 
completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 [2024-11-19 16:35:37.316762] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.303 starting I/O failed: -6 00:29:47.303 starting I/O failed: -6 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O 
failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, 
sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 [2024-11-19 16:35:37.317827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 starting I/O failed: -6 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.303 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 
00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with 
error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 [2024-11-19 16:35:37.319140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O 
failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting I/O failed: -6 00:29:47.304 Write completed with error (sct=0, sc=8) 00:29:47.304 starting 
I/O failed: -6
00:29:47.304 Write completed with error (sct=0, sc=8)
00:29:47.304 starting I/O failed: -6
[... "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated ...]
00:29:47.305 [2024-11-19 16:35:37.321162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:47.305 NVMe io qpair process completion error
[... completion-error lines repeated ...]
00:29:47.305 [2024-11-19 16:35:37.322456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... completion-error lines repeated ...]
00:29:47.305 [2024-11-19 16:35:37.323567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... completion-error lines repeated ...]
00:29:47.306 [2024-11-19 16:35:37.324714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... completion-error lines repeated ...]
00:29:47.307 [2024-11-19 16:35:37.327605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.307 NVMe io qpair process completion error
[... completion-error lines repeated ...]
00:29:47.307 [2024-11-19 16:35:37.328958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... completion-error lines repeated ...]
00:29:47.308 [2024-11-19 16:35:37.330064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... completion-error lines repeated ...]
00:29:47.308 [2024-11-19 16:35:37.331228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... completion-error lines repeated ...]
00:29:47.309 [2024-11-19 16:35:37.334053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.309 NVMe io qpair process completion error
[... completion-error lines repeated ...]
00:29:47.309 [2024-11-19 16:35:37.335444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
[... completion-error lines repeated ...]
00:29:47.310 [2024-11-19 16:35:37.336431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... completion-error lines repeated ...]
00:29:47.310 Write completed with
error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 
starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 [2024-11-19 16:35:37.337666] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write 
completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 
Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.310 starting I/O failed: -6 00:29:47.310 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 
00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 [2024-11-19 16:35:37.341831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:47.311 NVMe io qpair process completion error 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write 
completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 
00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write 
completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.311 Write completed with error (sct=0, sc=8) 00:29:47.311 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 
00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 [2024-11-19 16:35:37.344774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with 
error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed 
with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.312 Write completed with error (sct=0, sc=8) 00:29:47.312 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write 
completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 [2024-11-19 16:35:37.347319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:47.313 NVMe io qpair process completion error 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 
00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error 
(sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 Write completed with error (sct=0, sc=8) 00:29:47.313 starting I/O failed: -6
[… "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" repeated verbatim for the remaining queued I/Os through 00:29:47.315 …]
00:29:47.314 [2024-11-19 16:35:37.350401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:47.315 [2024-11-19 16:35:37.352383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:47.315 NVMe io qpair process completion error
00:29:47.315 Initializing NVMe Controllers
00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:47.315 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:29:47.315 Controller IO queue size 128, less than required. 00:29:47.315 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:47.315 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:29:47.315 Initialization complete. Launching workers. 
00:29:47.315 ========================================================
00:29:47.315 Latency(us)
00:29:47.315 Device Information : IOPS MiB/s Average min max
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1894.98 81.42 66824.46 884.24 109599.89
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1836.50 78.91 69657.18 972.39 151108.73
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1846.06 79.32 68607.09 930.40 151412.77
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1835.19 78.86 69035.34 804.66 121348.68
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1846.06 79.32 68656.31 803.20 120651.55
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1805.62 77.59 70222.21 1107.89 123235.50
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1808.23 77.70 70163.38 929.62 126760.20
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1806.93 77.64 70259.55 983.64 130382.42
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1765.62 75.87 71122.10 972.08 117118.92
00:29:47.315 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1822.15 78.30 69666.58 756.29 133935.78
00:29:47.315 ========================================================
00:29:47.315 Total : 18267.34 784.92 69401.66 756.29 151412.77
00:29:47.315
00:29:47.315 [2024-11-19 16:35:37.357296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101a140 is same with the state(6) to be set
00:29:47.315 [2024-11-19 16:35:37.357397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032c40 is same with the state(6) to be set
00:29:47.315 [2024-11-19 16:35:37.357457] nvme_tcp.c:
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1023f40 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x102dd40 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1037b40 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1028e40 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x103ca40 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1015240 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1010330 is same with the state(6) to be set 00:29:47.315 [2024-11-19 16:35:37.357860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x101f040 is same with the state(6) to be set 00:29:47.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:47.574 16:35:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 327291 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 327291 00:29:48.509 16:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 327291 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:48.509 16:35:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:48.509 rmmod nvme_tcp 00:29:48.509 rmmod nvme_fabrics 00:29:48.509 rmmod nvme_keyring 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 327111 ']' 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 327111 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 327111 ']' 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 327111 00:29:48.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (327111) - No such process 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 327111 is not found' 
00:29:48.509 Process with pid 327111 is not found 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.509 16:35:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:51.049 00:29:51.049 real 0m9.700s 00:29:51.049 user 0m22.937s 00:29:51.049 sys 0m5.811s 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:51.049 ************************************ 00:29:51.049 END TEST nvmf_shutdown_tc4 00:29:51.049 ************************************ 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:51.049 00:29:51.049 real 0m36.891s 00:29:51.049 user 1m38.462s 00:29:51.049 sys 0m12.016s 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:51.049 ************************************ 00:29:51.049 END TEST nvmf_shutdown 00:29:51.049 ************************************ 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:51.049 ************************************ 00:29:51.049 START TEST nvmf_nsid 00:29:51.049 ************************************ 00:29:51.049 16:35:40 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:51.049 * Looking for test storage... 
00:29:51.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:51.049 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:51.049 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:51.050 
16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:51.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.050 --rc genhtml_branch_coverage=1 00:29:51.050 --rc genhtml_function_coverage=1 00:29:51.050 --rc genhtml_legend=1 00:29:51.050 --rc geninfo_all_blocks=1 00:29:51.050 --rc 
geninfo_unexecuted_blocks=1 00:29:51.050 00:29:51.050 ' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:51.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.050 --rc genhtml_branch_coverage=1 00:29:51.050 --rc genhtml_function_coverage=1 00:29:51.050 --rc genhtml_legend=1 00:29:51.050 --rc geninfo_all_blocks=1 00:29:51.050 --rc geninfo_unexecuted_blocks=1 00:29:51.050 00:29:51.050 ' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:51.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.050 --rc genhtml_branch_coverage=1 00:29:51.050 --rc genhtml_function_coverage=1 00:29:51.050 --rc genhtml_legend=1 00:29:51.050 --rc geninfo_all_blocks=1 00:29:51.050 --rc geninfo_unexecuted_blocks=1 00:29:51.050 00:29:51.050 ' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:51.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:51.050 --rc genhtml_branch_coverage=1 00:29:51.050 --rc genhtml_function_coverage=1 00:29:51.050 --rc genhtml_legend=1 00:29:51.050 --rc geninfo_all_blocks=1 00:29:51.050 --rc geninfo_unexecuted_blocks=1 00:29:51.050 00:29:51.050 ' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:51.050 16:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:51.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:51.050 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:51.051 16:35:41 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:52.953 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:52.954 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:52.954 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:52.954 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:52.954 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:52.954 16:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.954 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:53.213 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:53.213 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:29:53.213 00:29:53.213 --- 10.0.0.2 ping statistics --- 00:29:53.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.213 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:53.213 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:53.213 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.084 ms 00:29:53.213 00:29:53.213 --- 10.0.0.1 ping statistics --- 00:29:53.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:53.213 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:53.213 16:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=329911 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 329911 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 329911 ']' 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:53.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.213 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:53.213 [2024-11-19 16:35:43.474254] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
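The `nvmf_tcp_init` sequence traced at 00:29:53 moves the target-side port (`cvl_0_0`) into a network namespace, addresses both ends on 10.0.0.0/24, and brings the links up before the ping checks. A hedged sketch of that sequence, emitted in dry-run form (`RUN=echo`) since the real commands need root; the interface names, namespace name, and addresses are taken from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above (nvmf/common.sh's
# nvmf_tcp_init): the target port moves into a netns, both sides get a
# 10.0.0.x/24 address, and links are brought up. RUN=echo prints the
# commands instead of executing them, because the real ones need root.
RUN=echo
setup_netns() {
    local ns=$1 tgt_if=$2 ini_if=$3
    $RUN ip netns add "$ns"
    $RUN ip link set "$tgt_if" netns "$ns"
    $RUN ip addr add 10.0.0.1/24 dev "$ini_if"
    $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    $RUN ip link set "$ini_if" up
    $RUN ip netns exec "$ns" ip link set "$tgt_if" up
    $RUN ip netns exec "$ns" ip link set lo up
}
setup_netns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

Setting `RUN=` (empty) would execute the commands for real under root, which is what the traced run does via `ip netns exec cvl_0_0_ns_spdk`.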
00:29:53.213 [2024-11-19 16:35:43.474351] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:53.213 [2024-11-19 16:35:43.548971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.471 [2024-11-19 16:35:43.595714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:53.471 [2024-11-19 16:35:43.595768] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:53.471 [2024-11-19 16:35:43.595796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:53.471 [2024-11-19 16:35:43.595807] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:53.471 [2024-11-19 16:35:43.595817] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
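The `waitforlisten` calls in the trace (autotest_common.sh@839-842, with `max_retries=100`) block until the freshly launched app creates its UNIX-domain RPC socket such as `/var/tmp/spdk.sock`. A simplified sketch of that retry loop; the function name and retry limit come from the trace, the body is an assumption:

```shell
#!/usr/bin/env bash
# Simplified sketch of the waitforlisten pattern from the trace: poll
# for the app's UNIX-domain RPC socket path, giving up after
# max_retries attempts (the traced run uses max_retries=100).
waitforlisten_sketch() {
    local rpc_addr=$1 max_retries=${2:-100} i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -e "$rpc_addr" ] && return 0
        i=$((i + 1))
        sleep 0.1
    done
    return 1
}
```

The real helper additionally probes the socket with an RPC call rather than just checking for the path, so this is only the skeleton of the loop.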
00:29:53.471 [2024-11-19 16:35:43.596484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=330050 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:53.471 
16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=55b34d99-6bb5-4b37-b27d-e3fddc7bf0c4 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=4f5cc4dd-51ba-4ee0-8e7f-c6641031b42f 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=9eeffda1-3691-40d1-bc06-9da940703b85 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:53.471 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:53.471 null0 00:29:53.471 null1 00:29:53.471 null2 00:29:53.471 [2024-11-19 16:35:43.774827] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.471 [2024-11-19 16:35:43.786003] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
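The `get_main_ns_ip` steps traced at nvmf/common.sh@769-783 pick the right address by mapping the transport to a variable *name* in an associative array and then dereferencing it with bash indirect expansion (`${!ip}`). A hedged reconstruction of that lookup, with the variable names and values (tcp resolving to 10.0.0.1) taken from the trace:

```shell
#!/usr/bin/env bash
# Sketch of the transport -> address lookup in get_main_ns_ip: the
# associative array maps a transport to the *name* of the variable
# holding its IP, and ${!ip} indirection dereferences that name.
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip_sketch() {
    local transport=$1 ip
    local -A ip_candidates=(
        [rdma]=NVMF_FIRST_TARGET_IP
        [tcp]=NVMF_INITIATOR_IP
    )
    ip=${ip_candidates[$transport]}
    # Both the candidate name and the value it points at must be set.
    [ -n "$ip" ] && [ -n "${!ip}" ] && echo "${!ip}"
}
```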
00:29:53.471 [2024-11-19 16:35:43.786082] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid330050 ] 00:29:53.471 [2024-11-19 16:35:43.799089] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 330050 /var/tmp/tgt2.sock 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 330050 ']' 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:53.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
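After `nvme connect`, the `nvme_connect` helper traced at nsid.sh@28-32 works out which `/sys/class/nvme/nvme*` controller belongs to the subsystem it just connected by comparing each controller's `subsysnqn` file against the target NQN. A sketch of that scan over a stand-in directory, since the real sysfs tree needs NVMe hardware; the stand-in root parameter is an addition for testability:

```shell
#!/usr/bin/env bash
# Sketch of the controller lookup traced in nsid.sh@28-32: scan
# <sysroot>/nvme*/subsysnqn for the entry matching our subsystem NQN
# and print the controller name. sysroot stands in for /sys/class/nvme
# so the sketch runs without NVMe hardware.
find_ctrlr() {
    local sysroot=$1 subnqn=$2 ctrlr
    for ctrlr in "$sysroot"/nvme*; do
        [ -e "$ctrlr/subsysnqn" ] || continue
        if [ "$(cat "$ctrlr/subsysnqn")" = "$subnqn" ]; then
            basename "$ctrlr"
            return 0
        fi
    done
    return 1
}
```

In the traced run this resolves `nqn.2024-10.io.spdk:cnode2` to `nvme0`, which later commands use as `/dev/nvme0n1` etc.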
00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:53.730 16:35:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:53.730 [2024-11-19 16:35:43.853606] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.730 [2024-11-19 16:35:43.900752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:53.988 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:53.988 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:53.988 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:54.247 [2024-11-19 16:35:44.533538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.247 [2024-11-19 16:35:44.549724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:54.247 nvme0n1 nvme0n2 00:29:54.247 nvme1n1 00:29:54.507 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:54.507 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:54.507 16:35:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:55.077 16:35:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 55b34d99-6bb5-4b37-b27d-e3fddc7bf0c4 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:56.018 16:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=55b34d996bb54b37b27de3fddc7bf0c4 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 55B34D996BB54B37B27DE3FDDC7BF0C4 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 55B34D996BB54B37B27DE3FDDC7BF0C4 == \5\5\B\3\4\D\9\9\6\B\B\5\4\B\3\7\B\2\7\D\E\3\F\D\D\C\7\B\F\0\C\4 ]] 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 4f5cc4dd-51ba-4ee0-8e7f-c6641031b42f 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:56.018 
16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=4f5cc4dd51ba4ee08e7fc6641031b42f 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 4F5CC4DD51BA4EE08E7FC6641031B42F 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 4F5CC4DD51BA4EE08E7FC6641031B42F == \4\F\5\C\C\4\D\D\5\1\B\A\4\E\E\0\8\E\7\F\C\6\6\4\1\0\3\1\B\4\2\F ]] 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 9eeffda1-3691-40d1-bc06-9da940703b85 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:56.018 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=9eeffda1369140d1bc069da940703b85 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 9EEFFDA1369140D1BC069DA940703B85 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 9EEFFDA1369140D1BC069DA940703B85 == \9\E\E\F\F\D\A\1\3\6\9\1\4\0\D\1\B\C\0\6\9\D\A\9\4\0\7\0\3\B\8\5 ]] 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 330050 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 330050 ']' 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 330050 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 330050 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 330050' 00:29:56.278 killing process with pid 330050 00:29:56.278 16:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 330050 00:29:56.278 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 330050 00:29:56.849 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:56.850 16:35:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:56.850 rmmod nvme_tcp 00:29:56.850 rmmod nvme_fabrics 00:29:56.850 rmmod nvme_keyring 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 329911 ']' 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 329911 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 329911 ']' 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 329911 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:56.850 16:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 329911 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 329911' 00:29:56.850 killing process with pid 329911 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 329911 00:29:56.850 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 329911 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.109 16:35:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.109 16:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.010 00:29:59.010 real 0m8.357s 00:29:59.010 user 0m8.132s 00:29:59.010 sys 0m2.711s 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 ************************************ 00:29:59.010 END TEST nvmf_nsid 00:29:59.010 ************************************ 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:59.010 00:29:59.010 real 18m13.165s 00:29:59.010 user 50m39.730s 00:29:59.010 sys 3m58.085s 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.010 16:35:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:59.010 ************************************ 00:29:59.010 END TEST nvmf_target_extra 00:29:59.010 ************************************ 00:29:59.270 16:35:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:59.270 16:35:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.270 16:35:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.270 16:35:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:59.270 ************************************ 00:29:59.270 START TEST nvmf_host 00:29:59.270 ************************************ 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:59.270 * Looking for test storage... 
00:29:59.270 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.270 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.271 --rc genhtml_branch_coverage=1 00:29:59.271 --rc genhtml_function_coverage=1 00:29:59.271 --rc genhtml_legend=1 00:29:59.271 --rc geninfo_all_blocks=1 00:29:59.271 --rc geninfo_unexecuted_blocks=1 00:29:59.271 00:29:59.271 ' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.271 --rc genhtml_branch_coverage=1 00:29:59.271 --rc genhtml_function_coverage=1 00:29:59.271 --rc genhtml_legend=1 00:29:59.271 --rc 
geninfo_all_blocks=1 00:29:59.271 --rc geninfo_unexecuted_blocks=1 00:29:59.271 00:29:59.271 ' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.271 --rc genhtml_branch_coverage=1 00:29:59.271 --rc genhtml_function_coverage=1 00:29:59.271 --rc genhtml_legend=1 00:29:59.271 --rc geninfo_all_blocks=1 00:29:59.271 --rc geninfo_unexecuted_blocks=1 00:29:59.271 00:29:59.271 ' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.271 --rc genhtml_branch_coverage=1 00:29:59.271 --rc genhtml_function_coverage=1 00:29:59.271 --rc genhtml_legend=1 00:29:59.271 --rc geninfo_all_blocks=1 00:29:59.271 --rc geninfo_unexecuted_blocks=1 00:29:59.271 00:29:59.271 ' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.271 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:59.271 ************************************ 00:29:59.271 START TEST nvmf_multicontroller 00:29:59.271 ************************************ 00:29:59.271 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:59.271 * Looking for test storage... 
00:29:59.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.531 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:59.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.531 --rc genhtml_branch_coverage=1 00:29:59.531 --rc genhtml_function_coverage=1 
00:29:59.532 --rc genhtml_legend=1 00:29:59.532 --rc geninfo_all_blocks=1 00:29:59.532 --rc geninfo_unexecuted_blocks=1 00:29:59.532 00:29:59.532 ' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.532 --rc genhtml_branch_coverage=1 00:29:59.532 --rc genhtml_function_coverage=1 00:29:59.532 --rc genhtml_legend=1 00:29:59.532 --rc geninfo_all_blocks=1 00:29:59.532 --rc geninfo_unexecuted_blocks=1 00:29:59.532 00:29:59.532 ' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.532 --rc genhtml_branch_coverage=1 00:29:59.532 --rc genhtml_function_coverage=1 00:29:59.532 --rc genhtml_legend=1 00:29:59.532 --rc geninfo_all_blocks=1 00:29:59.532 --rc geninfo_unexecuted_blocks=1 00:29:59.532 00:29:59.532 ' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:59.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.532 --rc genhtml_branch_coverage=1 00:29:59.532 --rc genhtml_function_coverage=1 00:29:59.532 --rc genhtml_legend=1 00:29:59.532 --rc geninfo_all_blocks=1 00:29:59.532 --rc geninfo_unexecuted_blocks=1 00:29:59.532 00:29:59.532 ' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:59.532 16:35:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:59.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:59.532 16:35:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:01.441 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:01.441 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:01.441 16:35:51 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:01.441 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:01.441 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:01.441 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:01.700 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:01.700 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:30:01.700 00:30:01.700 --- 10.0.0.2 ping statistics --- 00:30:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.700 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:01.700 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:01.700 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:30:01.700 00:30:01.700 --- 10.0.0.1 ping statistics --- 00:30:01.700 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:01.700 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=332483 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 332483 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 332483 ']' 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:01.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:01.700 16:35:51 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.700 [2024-11-19 16:35:51.969466] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:30:01.700 [2024-11-19 16:35:51.969544] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:01.960 [2024-11-19 16:35:52.048221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:01.960 [2024-11-19 16:35:52.095992] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:01.960 [2024-11-19 16:35:52.096046] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:01.960 [2024-11-19 16:35:52.096083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.960 [2024-11-19 16:35:52.096095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.960 [2024-11-19 16:35:52.096105] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.960 [2024-11-19 16:35:52.097639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.960 [2024-11-19 16:35:52.097706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.960 [2024-11-19 16:35:52.097709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.960 [2024-11-19 16:35:52.241783] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.960 Malloc0 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.960 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 [2024-11-19 
16:35:52.309450] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 [2024-11-19 16:35:52.317271] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 Malloc1 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=332516 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 332516 /var/tmp/bdevperf.sock 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@835 -- # '[' -z 332516 ']' 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:02.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:02.220 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.479 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.479 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:30:02.479 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:02.479 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.479 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.740 NVMe0n1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.740 1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:30:02.740 16:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.740 request: 00:30:02.740 { 00:30:02.740 "name": "NVMe0", 00:30:02.740 "trtype": "tcp", 00:30:02.740 "traddr": "10.0.0.2", 00:30:02.740 "adrfam": "ipv4", 00:30:02.740 "trsvcid": "4420", 00:30:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.740 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:30:02.740 "hostaddr": "10.0.0.1", 00:30:02.740 "prchk_reftag": false, 00:30:02.740 "prchk_guard": false, 00:30:02.740 "hdgst": false, 00:30:02.740 "ddgst": false, 00:30:02.740 "allow_unrecognized_csi": false, 00:30:02.740 "method": "bdev_nvme_attach_controller", 00:30:02.740 "req_id": 1 00:30:02.740 } 00:30:02.740 Got JSON-RPC error response 00:30:02.740 response: 00:30:02.740 { 00:30:02.740 "code": -114, 00:30:02.740 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:02.740 } 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:02.740 16:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.740 request: 00:30:02.740 { 00:30:02.740 "name": "NVMe0", 00:30:02.740 "trtype": "tcp", 00:30:02.740 "traddr": "10.0.0.2", 00:30:02.740 "adrfam": "ipv4", 00:30:02.740 "trsvcid": "4420", 00:30:02.740 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:02.740 "hostaddr": "10.0.0.1", 00:30:02.740 "prchk_reftag": false, 00:30:02.740 "prchk_guard": false, 00:30:02.740 "hdgst": false, 00:30:02.740 "ddgst": false, 00:30:02.740 "allow_unrecognized_csi": false, 00:30:02.740 "method": "bdev_nvme_attach_controller", 00:30:02.740 "req_id": 1 00:30:02.740 } 00:30:02.740 Got JSON-RPC error response 00:30:02.740 response: 00:30:02.740 { 00:30:02.740 "code": -114, 00:30:02.740 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:02.740 } 00:30:02.740 16:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:30:02.740 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.741 request: 00:30:02.741 { 00:30:02.741 "name": "NVMe0", 00:30:02.741 "trtype": "tcp", 00:30:02.741 "traddr": "10.0.0.2", 00:30:02.741 "adrfam": "ipv4", 00:30:02.741 "trsvcid": "4420", 00:30:02.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.741 "hostaddr": "10.0.0.1", 00:30:02.741 "prchk_reftag": false, 00:30:02.741 "prchk_guard": false, 00:30:02.741 "hdgst": false, 00:30:02.741 "ddgst": false, 00:30:02.741 "multipath": "disable", 00:30:02.741 "allow_unrecognized_csi": false, 00:30:02.741 "method": "bdev_nvme_attach_controller", 00:30:02.741 "req_id": 1 00:30:02.741 } 00:30:02.741 Got JSON-RPC error response 00:30:02.741 response: 00:30:02.741 { 00:30:02.741 "code": -114, 00:30:02.741 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:30:02.741 } 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:02.741 request: 00:30:02.741 { 00:30:02.741 "name": "NVMe0", 00:30:02.741 "trtype": "tcp", 00:30:02.741 "traddr": "10.0.0.2", 00:30:02.741 "adrfam": "ipv4", 00:30:02.741 "trsvcid": "4420", 00:30:02.741 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:02.741 "hostaddr": "10.0.0.1", 00:30:02.741 "prchk_reftag": false, 00:30:02.741 "prchk_guard": false, 00:30:02.741 "hdgst": false, 00:30:02.741 "ddgst": false, 00:30:02.741 "multipath": "failover", 00:30:02.741 "allow_unrecognized_csi": false, 00:30:02.741 "method": "bdev_nvme_attach_controller", 00:30:02.741 "req_id": 1 00:30:02.741 } 00:30:02.741 Got JSON-RPC error response 00:30:02.741 response: 00:30:02.741 { 00:30:02.741 "code": -114, 00:30:02.741 "message": "A controller named NVMe0 already exists with the specified network path" 00:30:02.741 } 00:30:02.741 16:35:52 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.741 16:35:52 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.017 NVMe0n1 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.017 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:30:03.017 16:35:53 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:04.397 { 00:30:04.397 "results": [ 00:30:04.397 { 00:30:04.397 "job": "NVMe0n1", 00:30:04.397 "core_mask": "0x1", 00:30:04.397 "workload": "write", 00:30:04.397 "status": "finished", 00:30:04.397 "queue_depth": 128, 00:30:04.397 "io_size": 4096, 00:30:04.397 "runtime": 1.006588, 00:30:04.397 "iops": 17476.862430309124, 00:30:04.397 "mibps": 68.26899386839501, 00:30:04.397 "io_failed": 0, 00:30:04.397 "io_timeout": 0, 00:30:04.397 "avg_latency_us": 7304.7534037357045, 00:30:04.397 "min_latency_us": 4150.613333333334, 00:30:04.397 "max_latency_us": 17184.995555555557 00:30:04.397 } 00:30:04.397 ], 00:30:04.397 "core_count": 1 00:30:04.397 } 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 332516 ']' 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332516' 00:30:04.397 killing process with pid 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 332516 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.397 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:30:04.656 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:04.656 [2024-11-19 16:35:52.424617] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:30:04.656 [2024-11-19 16:35:52.424703] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid332516 ] 00:30:04.656 [2024-11-19 16:35:52.492488] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.656 [2024-11-19 16:35:52.538667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.656 [2024-11-19 16:35:53.292157] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name e0f0cd9b-d248-4b3a-8b63-9753153e7010 already exists 00:30:04.656 [2024-11-19 16:35:53.292196] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:e0f0cd9b-d248-4b3a-8b63-9753153e7010 alias for bdev NVMe1n1 00:30:04.656 [2024-11-19 16:35:53.292220] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:30:04.656 Running I/O for 1 seconds... 00:30:04.656 17400.00 IOPS, 67.97 MiB/s 00:30:04.656 Latency(us) 00:30:04.656 [2024-11-19T15:35:54.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.656 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:30:04.656 NVMe0n1 : 1.01 17476.86 68.27 0.00 0.00 7304.75 4150.61 17185.00 00:30:04.656 [2024-11-19T15:35:54.995Z] =================================================================================================================== 00:30:04.656 [2024-11-19T15:35:54.995Z] Total : 17476.86 68.27 0.00 0.00 7304.75 4150.61 17185.00 00:30:04.656 Received shutdown signal, test time was about 1.000000 seconds 00:30:04.656 00:30:04.656 Latency(us) 00:30:04.656 [2024-11-19T15:35:54.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:04.656 [2024-11-19T15:35:54.995Z] =================================================================================================================== 00:30:04.656 [2024-11-19T15:35:54.995Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:30:04.656 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.656 rmmod nvme_tcp 00:30:04.656 rmmod nvme_fabrics 00:30:04.656 rmmod nvme_keyring 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 332483 ']' 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 332483 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 332483 ']' 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 332483 
00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 332483 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 332483' 00:30:04.656 killing process with pid 332483 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 332483 00:30:04.656 16:35:54 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 332483 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.915 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:30:04.916 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.916 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.916 16:35:55 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.818 16:35:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.818 00:30:06.818 real 0m7.572s 00:30:06.818 user 0m12.042s 00:30:06.818 sys 0m2.418s 00:30:06.818 16:35:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.818 16:35:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:30:06.818 ************************************ 00:30:06.818 END TEST nvmf_multicontroller 00:30:06.818 ************************************ 00:30:07.077 16:35:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:07.077 16:35:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.077 16:35:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.077 16:35:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.077 ************************************ 00:30:07.077 START TEST nvmf_aer 00:30:07.077 ************************************ 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:30:07.078 * Looking for test storage... 
00:30:07.078 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.078 --rc genhtml_branch_coverage=1 00:30:07.078 --rc genhtml_function_coverage=1 00:30:07.078 --rc genhtml_legend=1 00:30:07.078 --rc geninfo_all_blocks=1 00:30:07.078 --rc geninfo_unexecuted_blocks=1 00:30:07.078 00:30:07.078 ' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.078 --rc 
genhtml_branch_coverage=1 00:30:07.078 --rc genhtml_function_coverage=1 00:30:07.078 --rc genhtml_legend=1 00:30:07.078 --rc geninfo_all_blocks=1 00:30:07.078 --rc geninfo_unexecuted_blocks=1 00:30:07.078 00:30:07.078 ' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.078 --rc genhtml_branch_coverage=1 00:30:07.078 --rc genhtml_function_coverage=1 00:30:07.078 --rc genhtml_legend=1 00:30:07.078 --rc geninfo_all_blocks=1 00:30:07.078 --rc geninfo_unexecuted_blocks=1 00:30:07.078 00:30:07.078 ' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:07.078 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.078 --rc genhtml_branch_coverage=1 00:30:07.078 --rc genhtml_function_coverage=1 00:30:07.078 --rc genhtml_legend=1 00:30:07.078 --rc geninfo_all_blocks=1 00:30:07.078 --rc geninfo_unexecuted_blocks=1 00:30:07.078 00:30:07.078 ' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.078 16:35:57 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.078 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.079 16:35:57 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:09.613 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:09.613 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:09.613 16:35:59 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:09.613 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:09.613 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:09.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:09.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.295 ms 00:30:09.613 00:30:09.613 --- 10.0.0.2 ping statistics --- 00:30:09.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.613 rtt min/avg/max/mdev = 0.295/0.295/0.295/0.000 ms 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:09.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:09.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:30:09.613 00:30:09.613 --- 10.0.0.1 ping statistics --- 00:30:09.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:09.613 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:30:09.613 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=334808 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 334808 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 334808 ']' 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:09.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.614 [2024-11-19 16:35:59.686832] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:30:09.614 [2024-11-19 16:35:59.686905] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:09.614 [2024-11-19 16:35:59.761349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:09.614 [2024-11-19 16:35:59.813477] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:09.614 [2024-11-19 16:35:59.813533] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:09.614 [2024-11-19 16:35:59.813561] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:09.614 [2024-11-19 16:35:59.813573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:09.614 [2024-11-19 16:35:59.813582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:09.614 [2024-11-19 16:35:59.815295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:09.614 [2024-11-19 16:35:59.815352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:09.614 [2024-11-19 16:35:59.815405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:09.614 [2024-11-19 16:35:59.815408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:09.614 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 [2024-11-19 16:35:59.967567] 
tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:35:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 Malloc0 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 [2024-11-19 16:36:00.034903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:09.872 [ 00:30:09.872 { 00:30:09.872 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:09.872 "subtype": "Discovery", 00:30:09.872 "listen_addresses": [], 00:30:09.872 "allow_any_host": true, 00:30:09.872 "hosts": [] 00:30:09.872 }, 00:30:09.872 { 00:30:09.872 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:09.872 "subtype": "NVMe", 00:30:09.872 "listen_addresses": [ 00:30:09.872 { 00:30:09.872 "trtype": "TCP", 00:30:09.872 "adrfam": "IPv4", 00:30:09.872 "traddr": "10.0.0.2", 00:30:09.872 "trsvcid": "4420" 00:30:09.872 } 00:30:09.872 ], 00:30:09.872 "allow_any_host": true, 00:30:09.872 "hosts": [], 00:30:09.872 "serial_number": "SPDK00000000000001", 00:30:09.872 "model_number": "SPDK bdev Controller", 00:30:09.872 "max_namespaces": 2, 00:30:09.872 "min_cntlid": 1, 00:30:09.872 "max_cntlid": 65519, 00:30:09.872 "namespaces": [ 00:30:09.872 { 00:30:09.872 "nsid": 1, 00:30:09.872 "bdev_name": "Malloc0", 00:30:09.872 "name": "Malloc0", 00:30:09.872 "nguid": "740382E02A87403D9CF135439E98D6B9", 00:30:09.872 "uuid": "740382e0-2a87-403d-9cf1-35439e98d6b9" 00:30:09.872 } 00:30:09.872 ] 00:30:09.872 } 00:30:09.872 ] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=334889 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:30:09.872 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.130 Malloc1 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.130 Asynchronous Event Request test 00:30:10.130 Attaching to 10.0.0.2 00:30:10.130 Attached to 10.0.0.2 00:30:10.130 Registering asynchronous event callbacks... 00:30:10.130 Starting namespace attribute notice tests for all controllers... 00:30:10.130 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:30:10.130 aer_cb - Changed Namespace 00:30:10.130 Cleaning up... 
00:30:10.130 [ 00:30:10.130 { 00:30:10.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:10.130 "subtype": "Discovery", 00:30:10.130 "listen_addresses": [], 00:30:10.130 "allow_any_host": true, 00:30:10.130 "hosts": [] 00:30:10.130 }, 00:30:10.130 { 00:30:10.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:10.130 "subtype": "NVMe", 00:30:10.130 "listen_addresses": [ 00:30:10.130 { 00:30:10.130 "trtype": "TCP", 00:30:10.130 "adrfam": "IPv4", 00:30:10.130 "traddr": "10.0.0.2", 00:30:10.130 "trsvcid": "4420" 00:30:10.130 } 00:30:10.130 ], 00:30:10.130 "allow_any_host": true, 00:30:10.130 "hosts": [], 00:30:10.130 "serial_number": "SPDK00000000000001", 00:30:10.130 "model_number": "SPDK bdev Controller", 00:30:10.130 "max_namespaces": 2, 00:30:10.130 "min_cntlid": 1, 00:30:10.130 "max_cntlid": 65519, 00:30:10.130 "namespaces": [ 00:30:10.130 { 00:30:10.130 "nsid": 1, 00:30:10.130 "bdev_name": "Malloc0", 00:30:10.130 "name": "Malloc0", 00:30:10.130 "nguid": "740382E02A87403D9CF135439E98D6B9", 00:30:10.130 "uuid": "740382e0-2a87-403d-9cf1-35439e98d6b9" 00:30:10.130 }, 00:30:10.130 { 00:30:10.130 "nsid": 2, 00:30:10.130 "bdev_name": "Malloc1", 00:30:10.130 "name": "Malloc1", 00:30:10.130 "nguid": "7CF3F2B341F64589BAAD04225A3F75F2", 00:30:10.130 "uuid": "7cf3f2b3-41f6-4589-baad-04225a3f75f2" 00:30:10.130 } 00:30:10.130 ] 00:30:10.130 } 00:30:10.130 ] 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 334889 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.130 16:36:00 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.130 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:10.388 rmmod nvme_tcp 00:30:10.388 rmmod nvme_fabrics 00:30:10.388 rmmod nvme_keyring 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
334808 ']' 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 334808 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 334808 ']' 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 334808 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 334808 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 334808' 00:30:10.388 killing process with pid 334808 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 334808 00:30:10.388 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 334808 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:10.646 16:36:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.550 16:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:12.550 00:30:12.550 real 0m5.648s 00:30:12.550 user 0m4.724s 00:30:12.550 sys 0m2.067s 00:30:12.550 16:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:12.550 16:36:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:30:12.551 ************************************ 00:30:12.551 END TEST nvmf_aer 00:30:12.551 ************************************ 00:30:12.551 16:36:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:12.551 16:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:12.551 16:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:12.551 16:36:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:12.810 ************************************ 00:30:12.810 START TEST nvmf_async_init 00:30:12.810 ************************************ 00:30:12.810 16:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:30:12.810 * Looking for test storage... 
00:30:12.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:12.810 16:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:12.810 16:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:30:12.810 16:36:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:12.810 16:36:03 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:12.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.810 --rc genhtml_branch_coverage=1 00:30:12.810 --rc genhtml_function_coverage=1 00:30:12.810 --rc genhtml_legend=1 00:30:12.810 --rc geninfo_all_blocks=1 00:30:12.810 --rc geninfo_unexecuted_blocks=1 00:30:12.810 
00:30:12.810 ' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:12.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.810 --rc genhtml_branch_coverage=1 00:30:12.810 --rc genhtml_function_coverage=1 00:30:12.810 --rc genhtml_legend=1 00:30:12.810 --rc geninfo_all_blocks=1 00:30:12.810 --rc geninfo_unexecuted_blocks=1 00:30:12.810 00:30:12.810 ' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:12.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.810 --rc genhtml_branch_coverage=1 00:30:12.810 --rc genhtml_function_coverage=1 00:30:12.810 --rc genhtml_legend=1 00:30:12.810 --rc geninfo_all_blocks=1 00:30:12.810 --rc geninfo_unexecuted_blocks=1 00:30:12.810 00:30:12.810 ' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:12.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:12.810 --rc genhtml_branch_coverage=1 00:30:12.810 --rc genhtml_function_coverage=1 00:30:12.810 --rc genhtml_legend=1 00:30:12.810 --rc geninfo_all_blocks=1 00:30:12.810 --rc geninfo_unexecuted_blocks=1 00:30:12.810 00:30:12.810 ' 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:12.810 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:12.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=77cbf1eac416453787816368319d3047 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:30:12.811 16:36:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:15.347 16:36:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:15.347 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:15.347 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:15.347 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:15.347 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:15.347 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:15.347 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:30:15.347 00:30:15.347 --- 10.0.0.2 ping statistics --- 00:30:15.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.347 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:30:15.347 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:15.347 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:15.347 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:30:15.347 00:30:15.347 --- 10.0.0.1 ping statistics --- 00:30:15.347 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:15.347 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=337019 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 337019 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 337019 ']' 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:15.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 [2024-11-19 16:36:05.373476] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:30:15.348 [2024-11-19 16:36:05.373581] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:15.348 [2024-11-19 16:36:05.448084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:15.348 [2024-11-19 16:36:05.493141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:15.348 [2024-11-19 16:36:05.493195] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:15.348 [2024-11-19 16:36:05.493225] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:15.348 [2024-11-19 16:36:05.493236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:15.348 [2024-11-19 16:36:05.493246] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:15.348 [2024-11-19 16:36:05.493858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 [2024-11-19 16:36:05.635708] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 null0 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 77cbf1eac416453787816368319d3047 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.348 [2024-11-19 16:36:05.675999] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.348 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.607 nvme0n1 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.607 [ 00:30:15.607 { 00:30:15.607 "name": "nvme0n1", 00:30:15.607 "aliases": [ 00:30:15.607 "77cbf1ea-c416-4537-8781-6368319d3047" 00:30:15.607 ], 00:30:15.607 "product_name": "NVMe disk", 00:30:15.607 "block_size": 512, 00:30:15.607 "num_blocks": 2097152, 00:30:15.607 "uuid": "77cbf1ea-c416-4537-8781-6368319d3047", 00:30:15.607 "numa_id": 0, 00:30:15.607 "assigned_rate_limits": { 00:30:15.607 "rw_ios_per_sec": 0, 00:30:15.607 "rw_mbytes_per_sec": 0, 00:30:15.607 "r_mbytes_per_sec": 0, 00:30:15.607 "w_mbytes_per_sec": 0 00:30:15.607 }, 00:30:15.607 "claimed": false, 00:30:15.607 "zoned": false, 00:30:15.607 "supported_io_types": { 00:30:15.607 "read": true, 00:30:15.607 "write": true, 00:30:15.607 "unmap": false, 00:30:15.607 "flush": true, 00:30:15.607 "reset": true, 00:30:15.607 "nvme_admin": true, 00:30:15.607 "nvme_io": true, 00:30:15.607 "nvme_io_md": false, 00:30:15.607 "write_zeroes": true, 00:30:15.607 "zcopy": false, 00:30:15.607 "get_zone_info": false, 00:30:15.607 "zone_management": false, 00:30:15.607 "zone_append": false, 00:30:15.607 "compare": true, 00:30:15.607 "compare_and_write": true, 00:30:15.607 "abort": true, 00:30:15.607 "seek_hole": false, 00:30:15.607 "seek_data": false, 00:30:15.607 "copy": true, 00:30:15.607 
"nvme_iov_md": false 00:30:15.607 }, 00:30:15.607 "memory_domains": [ 00:30:15.607 { 00:30:15.607 "dma_device_id": "system", 00:30:15.607 "dma_device_type": 1 00:30:15.607 } 00:30:15.607 ], 00:30:15.607 "driver_specific": { 00:30:15.607 "nvme": [ 00:30:15.607 { 00:30:15.607 "trid": { 00:30:15.607 "trtype": "TCP", 00:30:15.607 "adrfam": "IPv4", 00:30:15.607 "traddr": "10.0.0.2", 00:30:15.607 "trsvcid": "4420", 00:30:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:15.607 }, 00:30:15.607 "ctrlr_data": { 00:30:15.607 "cntlid": 1, 00:30:15.607 "vendor_id": "0x8086", 00:30:15.607 "model_number": "SPDK bdev Controller", 00:30:15.607 "serial_number": "00000000000000000000", 00:30:15.607 "firmware_revision": "25.01", 00:30:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.607 "oacs": { 00:30:15.607 "security": 0, 00:30:15.607 "format": 0, 00:30:15.607 "firmware": 0, 00:30:15.607 "ns_manage": 0 00:30:15.607 }, 00:30:15.607 "multi_ctrlr": true, 00:30:15.607 "ana_reporting": false 00:30:15.607 }, 00:30:15.607 "vs": { 00:30:15.607 "nvme_version": "1.3" 00:30:15.607 }, 00:30:15.607 "ns_data": { 00:30:15.607 "id": 1, 00:30:15.607 "can_share": true 00:30:15.607 } 00:30:15.607 } 00:30:15.607 ], 00:30:15.607 "mp_policy": "active_passive" 00:30:15.607 } 00:30:15.607 } 00:30:15.607 ] 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.607 16:36:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.607 [2024-11-19 16:36:05.924610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:15.607 [2024-11-19 16:36:05.924712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x207b480 (9): Bad file descriptor 00:30:15.866 [2024-11-19 16:36:06.057223] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:15.866 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.866 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:15.866 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.866 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.866 [ 00:30:15.866 { 00:30:15.866 "name": "nvme0n1", 00:30:15.866 "aliases": [ 00:30:15.866 "77cbf1ea-c416-4537-8781-6368319d3047" 00:30:15.866 ], 00:30:15.866 "product_name": "NVMe disk", 00:30:15.866 "block_size": 512, 00:30:15.866 "num_blocks": 2097152, 00:30:15.866 "uuid": "77cbf1ea-c416-4537-8781-6368319d3047", 00:30:15.866 "numa_id": 0, 00:30:15.866 "assigned_rate_limits": { 00:30:15.866 "rw_ios_per_sec": 0, 00:30:15.866 "rw_mbytes_per_sec": 0, 00:30:15.866 "r_mbytes_per_sec": 0, 00:30:15.866 "w_mbytes_per_sec": 0 00:30:15.866 }, 00:30:15.866 "claimed": false, 00:30:15.866 "zoned": false, 00:30:15.866 "supported_io_types": { 00:30:15.866 "read": true, 00:30:15.866 "write": true, 00:30:15.866 "unmap": false, 00:30:15.866 "flush": true, 00:30:15.866 "reset": true, 00:30:15.866 "nvme_admin": true, 00:30:15.866 "nvme_io": true, 00:30:15.866 "nvme_io_md": false, 00:30:15.866 "write_zeroes": true, 00:30:15.866 "zcopy": false, 00:30:15.866 "get_zone_info": false, 00:30:15.866 "zone_management": false, 00:30:15.866 "zone_append": false, 00:30:15.866 "compare": true, 00:30:15.866 "compare_and_write": true, 00:30:15.866 "abort": true, 00:30:15.866 "seek_hole": false, 00:30:15.866 "seek_data": false, 00:30:15.866 "copy": true, 00:30:15.866 "nvme_iov_md": false 00:30:15.866 }, 00:30:15.866 "memory_domains": [ 
00:30:15.866 { 00:30:15.866 "dma_device_id": "system", 00:30:15.866 "dma_device_type": 1 00:30:15.866 } 00:30:15.866 ], 00:30:15.866 "driver_specific": { 00:30:15.866 "nvme": [ 00:30:15.866 { 00:30:15.866 "trid": { 00:30:15.866 "trtype": "TCP", 00:30:15.866 "adrfam": "IPv4", 00:30:15.866 "traddr": "10.0.0.2", 00:30:15.866 "trsvcid": "4420", 00:30:15.866 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:15.866 }, 00:30:15.866 "ctrlr_data": { 00:30:15.866 "cntlid": 2, 00:30:15.866 "vendor_id": "0x8086", 00:30:15.866 "model_number": "SPDK bdev Controller", 00:30:15.866 "serial_number": "00000000000000000000", 00:30:15.866 "firmware_revision": "25.01", 00:30:15.866 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:15.866 "oacs": { 00:30:15.866 "security": 0, 00:30:15.866 "format": 0, 00:30:15.866 "firmware": 0, 00:30:15.866 "ns_manage": 0 00:30:15.866 }, 00:30:15.866 "multi_ctrlr": true, 00:30:15.866 "ana_reporting": false 00:30:15.866 }, 00:30:15.866 "vs": { 00:30:15.866 "nvme_version": "1.3" 00:30:15.866 }, 00:30:15.866 "ns_data": { 00:30:15.866 "id": 1, 00:30:15.866 "can_share": true 00:30:15.866 } 00:30:15.866 } 00:30:15.866 ], 00:30:15.866 "mp_policy": "active_passive" 00:30:15.866 } 00:30:15.866 } 00:30:15.867 ] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.RSjsVazNZr 
00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.RSjsVazNZr 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.RSjsVazNZr 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 [2024-11-19 16:36:06.117263] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:15.867 [2024-11-19 16:36:06.117465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:15.867 [2024-11-19 16:36:06.133303] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:15.867 nvme0n1 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:15.867 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:16.126 [ 00:30:16.126 { 00:30:16.126 "name": "nvme0n1", 00:30:16.126 "aliases": [ 00:30:16.126 "77cbf1ea-c416-4537-8781-6368319d3047" 00:30:16.126 ], 00:30:16.126 "product_name": "NVMe disk", 00:30:16.126 "block_size": 512, 00:30:16.126 "num_blocks": 2097152, 00:30:16.126 "uuid": "77cbf1ea-c416-4537-8781-6368319d3047", 00:30:16.126 "numa_id": 0, 00:30:16.126 "assigned_rate_limits": { 00:30:16.126 "rw_ios_per_sec": 0, 00:30:16.126 
"rw_mbytes_per_sec": 0, 00:30:16.126 "r_mbytes_per_sec": 0, 00:30:16.126 "w_mbytes_per_sec": 0 00:30:16.126 }, 00:30:16.126 "claimed": false, 00:30:16.126 "zoned": false, 00:30:16.126 "supported_io_types": { 00:30:16.126 "read": true, 00:30:16.126 "write": true, 00:30:16.126 "unmap": false, 00:30:16.126 "flush": true, 00:30:16.126 "reset": true, 00:30:16.126 "nvme_admin": true, 00:30:16.126 "nvme_io": true, 00:30:16.126 "nvme_io_md": false, 00:30:16.126 "write_zeroes": true, 00:30:16.126 "zcopy": false, 00:30:16.126 "get_zone_info": false, 00:30:16.126 "zone_management": false, 00:30:16.126 "zone_append": false, 00:30:16.126 "compare": true, 00:30:16.126 "compare_and_write": true, 00:30:16.126 "abort": true, 00:30:16.126 "seek_hole": false, 00:30:16.126 "seek_data": false, 00:30:16.126 "copy": true, 00:30:16.126 "nvme_iov_md": false 00:30:16.126 }, 00:30:16.126 "memory_domains": [ 00:30:16.126 { 00:30:16.126 "dma_device_id": "system", 00:30:16.126 "dma_device_type": 1 00:30:16.126 } 00:30:16.126 ], 00:30:16.126 "driver_specific": { 00:30:16.126 "nvme": [ 00:30:16.126 { 00:30:16.126 "trid": { 00:30:16.126 "trtype": "TCP", 00:30:16.126 "adrfam": "IPv4", 00:30:16.126 "traddr": "10.0.0.2", 00:30:16.126 "trsvcid": "4421", 00:30:16.126 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:16.126 }, 00:30:16.126 "ctrlr_data": { 00:30:16.126 "cntlid": 3, 00:30:16.126 "vendor_id": "0x8086", 00:30:16.126 "model_number": "SPDK bdev Controller", 00:30:16.126 "serial_number": "00000000000000000000", 00:30:16.126 "firmware_revision": "25.01", 00:30:16.126 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.126 "oacs": { 00:30:16.126 "security": 0, 00:30:16.126 "format": 0, 00:30:16.126 "firmware": 0, 00:30:16.126 "ns_manage": 0 00:30:16.126 }, 00:30:16.126 "multi_ctrlr": true, 00:30:16.126 "ana_reporting": false 00:30:16.126 }, 00:30:16.126 "vs": { 00:30:16.126 "nvme_version": "1.3" 00:30:16.126 }, 00:30:16.126 "ns_data": { 00:30:16.126 "id": 1, 00:30:16.126 "can_share": true 00:30:16.126 } 
00:30:16.126 } 00:30:16.126 ], 00:30:16.126 "mp_policy": "active_passive" 00:30:16.126 } 00:30:16.126 } 00:30:16.126 ] 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.RSjsVazNZr 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:16.126 rmmod nvme_tcp 00:30:16.126 rmmod nvme_fabrics 00:30:16.126 rmmod nvme_keyring 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:16.126 16:36:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 337019 ']' 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 337019 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 337019 ']' 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 337019 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 337019 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 337019' 00:30:16.126 killing process with pid 337019 00:30:16.126 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 337019 00:30:16.127 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 337019 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:16.387 16:36:06 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.387 16:36:06 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.289 16:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:18.289 00:30:18.289 real 0m5.647s 00:30:18.289 user 0m2.148s 00:30:18.289 sys 0m1.918s 00:30:18.289 16:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.289 16:36:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:18.289 ************************************ 00:30:18.289 END TEST nvmf_async_init 00:30:18.289 ************************************ 00:30:18.290 16:36:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:18.290 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:18.290 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.290 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.290 ************************************ 00:30:18.290 START TEST dma 00:30:18.290 ************************************ 00:30:18.290 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:18.548 * 
Looking for test storage... 00:30:18.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.548 --rc genhtml_branch_coverage=1 00:30:18.548 --rc genhtml_function_coverage=1 00:30:18.548 --rc genhtml_legend=1 00:30:18.548 --rc geninfo_all_blocks=1 00:30:18.548 --rc geninfo_unexecuted_blocks=1 00:30:18.548 00:30:18.548 ' 00:30:18.548 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.549 --rc genhtml_branch_coverage=1 00:30:18.549 --rc genhtml_function_coverage=1 
00:30:18.549 --rc genhtml_legend=1 00:30:18.549 --rc geninfo_all_blocks=1 00:30:18.549 --rc geninfo_unexecuted_blocks=1 00:30:18.549 00:30:18.549 ' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.549 --rc genhtml_branch_coverage=1 00:30:18.549 --rc genhtml_function_coverage=1 00:30:18.549 --rc genhtml_legend=1 00:30:18.549 --rc geninfo_all_blocks=1 00:30:18.549 --rc geninfo_unexecuted_blocks=1 00:30:18.549 00:30:18.549 ' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.549 --rc genhtml_branch_coverage=1 00:30:18.549 --rc genhtml_function_coverage=1 00:30:18.549 --rc genhtml_legend=1 00:30:18.549 --rc geninfo_all_blocks=1 00:30:18.549 --rc geninfo_unexecuted_blocks=1 00:30:18.549 00:30:18.549 ' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:18.549 
16:36:08 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.549 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:18.549 00:30:18.549 real 0m0.167s 00:30:18.549 user 0m0.102s 00:30:18.549 sys 0m0.074s 00:30:18.549 16:36:08 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:18.549 ************************************ 00:30:18.549 END TEST dma 00:30:18.549 ************************************ 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:18.549 ************************************ 00:30:18.549 START TEST nvmf_identify 00:30:18.549 ************************************ 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:18.549 * Looking for test storage... 
00:30:18.549 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:30:18.549 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:18.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.808 --rc genhtml_branch_coverage=1 00:30:18.808 --rc genhtml_function_coverage=1 00:30:18.808 --rc genhtml_legend=1 00:30:18.808 --rc geninfo_all_blocks=1 00:30:18.808 --rc geninfo_unexecuted_blocks=1 00:30:18.808 00:30:18.808 ' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
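Editor's note: the `lt 1.15 2` / `cmp_versions` trace above is `scripts/common.sh` checking whether the installed lcov predates version 2: it splits each version on `.-:`, then compares the numeric fields pairwise, padding the shorter version with zeros. A simplified stand-in (not the SPDK implementation itself) of that comparison:

```shell
#!/usr/bin/env bash
# Simplified sketch of the dotted-version comparison walked through in the
# log: split on ".-:", compare numeric fields left to right, pad with 0.
version_lt() {
    local -a v1 v2
    IFS=.-: read -ra v1 <<< "$1"
    IFS=.-: read -ra v2 <<< "$2"
    local i len=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < len; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 2.0 1.15 || echo "2.0 >= 1.15"
```

This is why the trace above sets `ver1[v]=1` against `ver2[v]=2` and returns 0: the first field already decides the comparison, matching the `lcov --version` check that selects the coverage options.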
# LCOV_OPTS=' 00:30:18.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.808 --rc genhtml_branch_coverage=1 00:30:18.808 --rc genhtml_function_coverage=1 00:30:18.808 --rc genhtml_legend=1 00:30:18.808 --rc geninfo_all_blocks=1 00:30:18.808 --rc geninfo_unexecuted_blocks=1 00:30:18.808 00:30:18.808 ' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:18.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.808 --rc genhtml_branch_coverage=1 00:30:18.808 --rc genhtml_function_coverage=1 00:30:18.808 --rc genhtml_legend=1 00:30:18.808 --rc geninfo_all_blocks=1 00:30:18.808 --rc geninfo_unexecuted_blocks=1 00:30:18.808 00:30:18.808 ' 00:30:18.808 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:18.808 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:18.808 --rc genhtml_branch_coverage=1 00:30:18.808 --rc genhtml_function_coverage=1 00:30:18.808 --rc genhtml_legend=1 00:30:18.808 --rc geninfo_all_blocks=1 00:30:18.808 --rc geninfo_unexecuted_blocks=1 00:30:18.808 00:30:18.808 ' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:18.809 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:18.809 16:36:08 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:20.712 16:36:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:20.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:20.712 
16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:20.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:20.712 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:20.712 16:36:11 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:20.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:20.712 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:20.713 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:20.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:20.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:20.971 00:30:20.971 --- 10.0.0.2 ping statistics --- 00:30:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.971 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:20.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:20.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:30:20.971 00:30:20.971 --- 10.0.0.1 ping statistics --- 00:30:20.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.971 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
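Editor's note: the ping exchange above is the tail end of `nvmf_tcp_init`. The target-side NIC is isolated in a network namespace and addressed as 10.0.0.2, while the initiator side keeps 10.0.0.1, so target and initiator can talk over real hardware on one host. Condensed from the commands visible in the log (interface names `cvl_0_0`/`cvl_0_1` are specific to this runner); all of this requires root, so it is an outline rather than a runnable script:

```shell
# Condensed from the nvmf_tcp_init steps recorded above (root required).
ip netns add cvl_0_0_ns_spdk                      # target namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move target NIC into it
ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
```

The two successful pings in the log are the sanity check that both directions of this topology work before the NVMe-oF target is started inside the namespace.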
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=339698 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 339698 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 339698 ']' 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.971 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:20.971 [2024-11-19 16:36:11.271444] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:30:20.971 [2024-11-19 16:36:11.271529] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:21.229 [2024-11-19 16:36:11.346335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:21.229 [2024-11-19 16:36:11.393271] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:21.229 [2024-11-19 16:36:11.393324] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:21.230 [2024-11-19 16:36:11.393353] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:21.230 [2024-11-19 16:36:11.393372] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:21.230 [2024-11-19 16:36:11.393382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:21.230 [2024-11-19 16:36:11.398090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.230 [2024-11-19 16:36:11.398158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:21.230 [2024-11-19 16:36:11.398209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:21.230 [2024-11-19 16:36:11.398213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.230 [2024-11-19 16:36:11.513631] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.230 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 Malloc0 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 [2024-11-19 16:36:11.597556] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 16:36:11 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.491 [ 00:30:21.491 { 00:30:21.491 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:21.491 "subtype": "Discovery", 00:30:21.491 "listen_addresses": [ 00:30:21.491 { 00:30:21.491 "trtype": "TCP", 00:30:21.491 "adrfam": "IPv4", 00:30:21.491 "traddr": "10.0.0.2", 00:30:21.491 "trsvcid": "4420" 00:30:21.491 } 00:30:21.491 ], 00:30:21.491 "allow_any_host": true, 00:30:21.491 "hosts": [] 00:30:21.491 }, 00:30:21.491 { 00:30:21.491 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:21.491 "subtype": "NVMe", 00:30:21.491 "listen_addresses": [ 00:30:21.491 { 00:30:21.491 "trtype": "TCP", 00:30:21.491 "adrfam": "IPv4", 00:30:21.491 "traddr": "10.0.0.2", 00:30:21.491 "trsvcid": "4420" 00:30:21.491 } 00:30:21.491 ], 00:30:21.491 "allow_any_host": true, 00:30:21.491 "hosts": [], 00:30:21.491 "serial_number": "SPDK00000000000001", 00:30:21.491 "model_number": "SPDK bdev Controller", 00:30:21.491 "max_namespaces": 32, 00:30:21.491 "min_cntlid": 1, 00:30:21.491 "max_cntlid": 65519, 00:30:21.491 "namespaces": [ 00:30:21.491 { 00:30:21.491 "nsid": 1, 00:30:21.491 "bdev_name": "Malloc0", 00:30:21.491 "name": "Malloc0", 00:30:21.491 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:21.491 "eui64": "ABCDEF0123456789", 00:30:21.491 "uuid": "d59b8961-67da-40e2-96a1-6885006f6616" 00:30:21.491 } 00:30:21.491 ] 00:30:21.491 } 00:30:21.491 ] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.491 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:21.491 [2024-11-19 16:36:11.635022] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:30:21.491 [2024-11-19 16:36:11.635082] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339726 ] 00:30:21.491 [2024-11-19 16:36:11.683028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:21.491 [2024-11-19 16:36:11.683113] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:21.491 [2024-11-19 16:36:11.683126] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:21.491 [2024-11-19 16:36:11.683144] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:21.491 [2024-11-19 16:36:11.683159] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:21.491 [2024-11-19 16:36:11.687490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:21.491 [2024-11-19 16:36:11.687560] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22d4650 0 00:30:21.491 [2024-11-19 16:36:11.694098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:21.492 [2024-11-19 16:36:11.694120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:21.492 [2024-11-19 16:36:11.694130] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:21.492 [2024-11-19 16:36:11.694137] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:21.492 [2024-11-19 16:36:11.694178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.694190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.694198] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.694216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:21.492 [2024-11-19 16:36:11.694243] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.701082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.701101] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.701109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701116] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.701137] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:21.492 [2024-11-19 16:36:11.701149] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:21.492 [2024-11-19 16:36:11.701159] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:21.492 [2024-11-19 16:36:11.701181] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 
00:30:21.492 [2024-11-19 16:36:11.701208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.701233] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.701370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.701390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.701398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.701414] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:21.492 [2024-11-19 16:36:11.701427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:21.492 [2024-11-19 16:36:11.701440] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701448] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701455] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.701466] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.701488] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.701568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.701582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:21.492 [2024-11-19 16:36:11.701589] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701596] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.701606] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:21.492 [2024-11-19 16:36:11.701620] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:21.492 [2024-11-19 16:36:11.701632] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.701657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.701679] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.701770] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.701783] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.701790] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701797] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.701806] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:21.492 [2024-11-19 16:36:11.701823] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701832] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701838] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.701849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.701871] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.701970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.701984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.701991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.701998] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.702011] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:21.492 [2024-11-19 16:36:11.702020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:21.492 [2024-11-19 16:36:11.702034] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:21.492 [2024-11-19 16:36:11.702144] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:21.492 [2024-11-19 16:36:11.702155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:21.492 [2024-11-19 16:36:11.702169] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702177] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.702194] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.702217] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 16:36:11.702337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.702351] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.702357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702364] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.492 [2024-11-19 16:36:11.702373] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:21.492 [2024-11-19 16:36:11.702389] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702405] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.492 [2024-11-19 16:36:11.702416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.492 [2024-11-19 16:36:11.702437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.492 [2024-11-19 
16:36:11.702515] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.492 [2024-11-19 16:36:11.702529] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.492 [2024-11-19 16:36:11.702536] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.492 [2024-11-19 16:36:11.702542] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.493 [2024-11-19 16:36:11.702550] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:21.493 [2024-11-19 16:36:11.702559] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.702572] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:21.493 [2024-11-19 16:36:11.702593] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.702609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702617] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.702635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.493 [2024-11-19 16:36:11.702658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.493 [2024-11-19 16:36:11.702775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.493 [2024-11-19 16:36:11.702790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:30:21.493 [2024-11-19 16:36:11.702797] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702804] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d4650): datao=0, datal=4096, cccid=0 00:30:21.493 [2024-11-19 16:36:11.702812] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x232ef40) on tqpair(0x22d4650): expected_datao=0, payload_size=4096 00:30:21.493 [2024-11-19 16:36:11.702819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702830] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702839] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.493 [2024-11-19 16:36:11.702872] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.493 [2024-11-19 16:36:11.702879] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702886] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.493 [2024-11-19 16:36:11.702897] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:21.493 [2024-11-19 16:36:11.702906] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:21.493 [2024-11-19 16:36:11.702913] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:21.493 [2024-11-19 16:36:11.702927] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:21.493 [2024-11-19 16:36:11.702936] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:21.493 [2024-11-19 16:36:11.702944] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.702962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.702976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702983] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.702990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703001] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:21.493 [2024-11-19 16:36:11.703023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.493 [2024-11-19 16:36:11.703151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.493 [2024-11-19 16:36:11.703166] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.493 [2024-11-19 16:36:11.703173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.493 [2024-11-19 16:36:11.703191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703216] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.493 [2024-11-19 16:36:11.703231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703245] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.493 [2024-11-19 16:36:11.703264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703278] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.493 [2024-11-19 16:36:11.703297] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703334] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.493 [2024-11-19 16:36:11.703343] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.703358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:21.493 [2024-11-19 16:36:11.703384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703392] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703402] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.493 [2024-11-19 16:36:11.703424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232ef40, cid 0, qid 0 00:30:21.493 [2024-11-19 16:36:11.703449] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f0c0, cid 1, qid 0 00:30:21.493 [2024-11-19 16:36:11.703458] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f240, cid 2, qid 0 00:30:21.493 [2024-11-19 16:36:11.703465] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.493 [2024-11-19 16:36:11.703473] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f540, cid 4, qid 0 00:30:21.493 [2024-11-19 16:36:11.703700] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.493 [2024-11-19 16:36:11.703715] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.493 [2024-11-19 16:36:11.703722] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703728] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f540) on tqpair=0x22d4650 00:30:21.493 [2024-11-19 16:36:11.703742] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:21.493 [2024-11-19 16:36:11.703752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:30:21.493 [2024-11-19 16:36:11.703771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.703780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d4650) 00:30:21.493 [2024-11-19 16:36:11.703791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.493 [2024-11-19 16:36:11.703828] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f540, cid 4, qid 0 00:30:21.493 [2024-11-19 16:36:11.703982] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.493 [2024-11-19 16:36:11.703997] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.493 [2024-11-19 16:36:11.704004] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.704010] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d4650): datao=0, datal=4096, cccid=4 00:30:21.493 [2024-11-19 16:36:11.704018] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x232f540) on tqpair(0x22d4650): expected_datao=0, payload_size=4096 00:30:21.493 [2024-11-19 16:36:11.704025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.704042] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.704051] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.704079] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.493 [2024-11-19 16:36:11.704093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.493 [2024-11-19 16:36:11.704100] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.493 [2024-11-19 16:36:11.704106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x232f540) on tqpair=0x22d4650 00:30:21.493 [2024-11-19 16:36:11.704125] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:21.493 [2024-11-19 16:36:11.704161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d4650) 00:30:21.494 [2024-11-19 16:36:11.704182] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.494 [2024-11-19 16:36:11.704194] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704208] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22d4650) 00:30:21.494 [2024-11-19 16:36:11.704217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.494 [2024-11-19 16:36:11.704244] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f540, cid 4, qid 0 00:30:21.494 [2024-11-19 16:36:11.704257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f6c0, cid 5, qid 0 00:30:21.494 [2024-11-19 16:36:11.704395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.494 [2024-11-19 16:36:11.704409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.494 [2024-11-19 16:36:11.704416] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704423] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d4650): datao=0, datal=1024, cccid=4 00:30:21.494 [2024-11-19 16:36:11.704430] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x232f540) on tqpair(0x22d4650): expected_datao=0, payload_size=1024 00:30:21.494 [2024-11-19 16:36:11.704438] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704448] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704455] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704464] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.494 [2024-11-19 16:36:11.704472] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.494 [2024-11-19 16:36:11.704479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.704485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f6c0) on tqpair=0x22d4650 00:30:21.494 [2024-11-19 16:36:11.745216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.494 [2024-11-19 16:36:11.745235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.494 [2024-11-19 16:36:11.745243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745254] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f540) on tqpair=0x22d4650 00:30:21.494 [2024-11-19 16:36:11.745272] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d4650) 00:30:21.494 [2024-11-19 16:36:11.745294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.494 [2024-11-19 16:36:11.745325] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f540, cid 4, qid 0 00:30:21.494 [2024-11-19 16:36:11.745425] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.494 [2024-11-19 16:36:11.745439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.494 [2024-11-19 16:36:11.745446] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745453] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d4650): datao=0, datal=3072, cccid=4 00:30:21.494 [2024-11-19 16:36:11.745461] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x232f540) on tqpair(0x22d4650): expected_datao=0, payload_size=3072 00:30:21.494 [2024-11-19 16:36:11.745468] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745479] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745486] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.494 [2024-11-19 16:36:11.745508] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.494 [2024-11-19 16:36:11.745515] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f540) on tqpair=0x22d4650 00:30:21.494 [2024-11-19 16:36:11.745536] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745545] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22d4650) 00:30:21.494 [2024-11-19 16:36:11.745556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.494 [2024-11-19 16:36:11.745584] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f540, cid 4, qid 0 00:30:21.494 [2024-11-19 
16:36:11.745682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.494 [2024-11-19 16:36:11.745695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.494 [2024-11-19 16:36:11.745702] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745708] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22d4650): datao=0, datal=8, cccid=4 00:30:21.494 [2024-11-19 16:36:11.745716] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x232f540) on tqpair(0x22d4650): expected_datao=0, payload_size=8 00:30:21.494 [2024-11-19 16:36:11.745723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745733] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.745740] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.791088] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.494 [2024-11-19 16:36:11.791107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.494 [2024-11-19 16:36:11.791130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.494 [2024-11-19 16:36:11.791138] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f540) on tqpair=0x22d4650 00:30:21.494 ===================================================== 00:30:21.494 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:21.494 ===================================================== 00:30:21.494 Controller Capabilities/Features 00:30:21.494 ================================ 00:30:21.494 Vendor ID: 0000 00:30:21.494 Subsystem Vendor ID: 0000 00:30:21.494 Serial Number: .................... 00:30:21.494 Model Number: ........................................ 
00:30:21.494 Firmware Version: 25.01 00:30:21.494 Recommended Arb Burst: 0 00:30:21.494 IEEE OUI Identifier: 00 00 00 00:30:21.494 Multi-path I/O 00:30:21.494 May have multiple subsystem ports: No 00:30:21.494 May have multiple controllers: No 00:30:21.494 Associated with SR-IOV VF: No 00:30:21.494 Max Data Transfer Size: 131072 00:30:21.494 Max Number of Namespaces: 0 00:30:21.494 Max Number of I/O Queues: 1024 00:30:21.494 NVMe Specification Version (VS): 1.3 00:30:21.494 NVMe Specification Version (Identify): 1.3 00:30:21.494 Maximum Queue Entries: 128 00:30:21.494 Contiguous Queues Required: Yes 00:30:21.494 Arbitration Mechanisms Supported 00:30:21.494 Weighted Round Robin: Not Supported 00:30:21.494 Vendor Specific: Not Supported 00:30:21.494 Reset Timeout: 15000 ms 00:30:21.494 Doorbell Stride: 4 bytes 00:30:21.494 NVM Subsystem Reset: Not Supported 00:30:21.494 Command Sets Supported 00:30:21.494 NVM Command Set: Supported 00:30:21.494 Boot Partition: Not Supported 00:30:21.494 Memory Page Size Minimum: 4096 bytes 00:30:21.494 Memory Page Size Maximum: 4096 bytes 00:30:21.494 Persistent Memory Region: Not Supported 00:30:21.494 Optional Asynchronous Events Supported 00:30:21.494 Namespace Attribute Notices: Not Supported 00:30:21.494 Firmware Activation Notices: Not Supported 00:30:21.494 ANA Change Notices: Not Supported 00:30:21.494 PLE Aggregate Log Change Notices: Not Supported 00:30:21.494 LBA Status Info Alert Notices: Not Supported 00:30:21.494 EGE Aggregate Log Change Notices: Not Supported 00:30:21.494 Normal NVM Subsystem Shutdown event: Not Supported 00:30:21.494 Zone Descriptor Change Notices: Not Supported 00:30:21.494 Discovery Log Change Notices: Supported 00:30:21.494 Controller Attributes 00:30:21.494 128-bit Host Identifier: Not Supported 00:30:21.494 Non-Operational Permissive Mode: Not Supported 00:30:21.494 NVM Sets: Not Supported 00:30:21.494 Read Recovery Levels: Not Supported 00:30:21.494 Endurance Groups: Not Supported 00:30:21.494 
Predictable Latency Mode: Not Supported 00:30:21.494 Traffic Based Keep Alive: Not Supported 00:30:21.494 Namespace Granularity: Not Supported 00:30:21.494 SQ Associations: Not Supported 00:30:21.494 UUID List: Not Supported 00:30:21.494 Multi-Domain Subsystem: Not Supported 00:30:21.494 Fixed Capacity Management: Not Supported 00:30:21.494 Variable Capacity Management: Not Supported 00:30:21.494 Delete Endurance Group: Not Supported 00:30:21.494 Delete NVM Set: Not Supported 00:30:21.495 Extended LBA Formats Supported: Not Supported 00:30:21.495 Flexible Data Placement Supported: Not Supported 00:30:21.495 00:30:21.495 Controller Memory Buffer Support 00:30:21.495 ================================ 00:30:21.495 Supported: No 00:30:21.495 00:30:21.495 Persistent Memory Region Support 00:30:21.495 ================================ 00:30:21.495 Supported: No 00:30:21.495 00:30:21.495 Admin Command Set Attributes 00:30:21.495 ============================ 00:30:21.495 Security Send/Receive: Not Supported 00:30:21.495 Format NVM: Not Supported 00:30:21.495 Firmware Activate/Download: Not Supported 00:30:21.495 Namespace Management: Not Supported 00:30:21.495 Device Self-Test: Not Supported 00:30:21.495 Directives: Not Supported 00:30:21.495 NVMe-MI: Not Supported 00:30:21.495 Virtualization Management: Not Supported 00:30:21.495 Doorbell Buffer Config: Not Supported 00:30:21.495 Get LBA Status Capability: Not Supported 00:30:21.495 Command & Feature Lockdown Capability: Not Supported 00:30:21.495 Abort Command Limit: 1 00:30:21.495 Async Event Request Limit: 4 00:30:21.495 Number of Firmware Slots: N/A 00:30:21.495 Firmware Slot 1 Read-Only: N/A 00:30:21.495 Firmware Activation Without Reset: N/A 00:30:21.495 Multiple Update Detection Support: N/A 00:30:21.495 Firmware Update Granularity: No Information Provided 00:30:21.495 Per-Namespace SMART Log: No 00:30:21.495 Asymmetric Namespace Access Log Page: Not Supported 00:30:21.495 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:21.495 Command Effects Log Page: Not Supported 00:30:21.495 Get Log Page Extended Data: Supported 00:30:21.495 Telemetry Log Pages: Not Supported 00:30:21.495 Persistent Event Log Pages: Not Supported 00:30:21.495 Supported Log Pages Log Page: May Support 00:30:21.495 Commands Supported & Effects Log Page: Not Supported 00:30:21.495 Feature Identifiers & Effects Log Page: May Support 00:30:21.495 NVMe-MI Commands & Effects Log Page: May Support 00:30:21.495 Data Area 4 for Telemetry Log: Not Supported 00:30:21.495 Error Log Page Entries Supported: 128 00:30:21.495 Keep Alive: Not Supported 00:30:21.495 00:30:21.495 NVM Command Set Attributes 00:30:21.495 ========================== 00:30:21.495 Submission Queue Entry Size 00:30:21.495 Max: 1 00:30:21.495 Min: 1 00:30:21.495 Completion Queue Entry Size 00:30:21.495 Max: 1 00:30:21.495 Min: 1 00:30:21.495 Number of Namespaces: 0 00:30:21.495 Compare Command: Not Supported 00:30:21.495 Write Uncorrectable Command: Not Supported 00:30:21.495 Dataset Management Command: Not Supported 00:30:21.495 Write Zeroes Command: Not Supported 00:30:21.495 Set Features Save Field: Not Supported 00:30:21.495 Reservations: Not Supported 00:30:21.495 Timestamp: Not Supported 00:30:21.495 Copy: Not Supported 00:30:21.495 Volatile Write Cache: Not Present 00:30:21.495 Atomic Write Unit (Normal): 1 00:30:21.495 Atomic Write Unit (PFail): 1 00:30:21.495 Atomic Compare & Write Unit: 1 00:30:21.495 Fused Compare & Write: Supported 00:30:21.495 Scatter-Gather List 00:30:21.495 SGL Command Set: Supported 00:30:21.495 SGL Keyed: Supported 00:30:21.495 SGL Bit Bucket Descriptor: Not Supported 00:30:21.495 SGL Metadata Pointer: Not Supported 00:30:21.495 Oversized SGL: Not Supported 00:30:21.495 SGL Metadata Address: Not Supported 00:30:21.495 SGL Offset: Supported 00:30:21.495 Transport SGL Data Block: Not Supported 00:30:21.495 Replay Protected Memory Block: Not Supported 00:30:21.495 00:30:21.495 
Firmware Slot Information 00:30:21.495 ========================= 00:30:21.495 Active slot: 0 00:30:21.495 00:30:21.495 00:30:21.495 Error Log 00:30:21.495 ========= 00:30:21.495 00:30:21.495 Active Namespaces 00:30:21.495 ================= 00:30:21.495 Discovery Log Page 00:30:21.495 ================== 00:30:21.495 Generation Counter: 2 00:30:21.495 Number of Records: 2 00:30:21.495 Record Format: 0 00:30:21.495 00:30:21.495 Discovery Log Entry 0 00:30:21.495 ---------------------- 00:30:21.495 Transport Type: 3 (TCP) 00:30:21.495 Address Family: 1 (IPv4) 00:30:21.495 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:21.495 Entry Flags: 00:30:21.495 Duplicate Returned Information: 1 00:30:21.495 Explicit Persistent Connection Support for Discovery: 1 00:30:21.495 Transport Requirements: 00:30:21.495 Secure Channel: Not Required 00:30:21.495 Port ID: 0 (0x0000) 00:30:21.495 Controller ID: 65535 (0xffff) 00:30:21.495 Admin Max SQ Size: 128 00:30:21.495 Transport Service Identifier: 4420 00:30:21.495 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:21.495 Transport Address: 10.0.0.2 00:30:21.495 Discovery Log Entry 1 00:30:21.495 ---------------------- 00:30:21.495 Transport Type: 3 (TCP) 00:30:21.495 Address Family: 1 (IPv4) 00:30:21.495 Subsystem Type: 2 (NVM Subsystem) 00:30:21.495 Entry Flags: 00:30:21.495 Duplicate Returned Information: 0 00:30:21.495 Explicit Persistent Connection Support for Discovery: 0 00:30:21.495 Transport Requirements: 00:30:21.495 Secure Channel: Not Required 00:30:21.495 Port ID: 0 (0x0000) 00:30:21.495 Controller ID: 65535 (0xffff) 00:30:21.495 Admin Max SQ Size: 128 00:30:21.495 Transport Service Identifier: 4420 00:30:21.495 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:21.495 Transport Address: 10.0.0.2 [2024-11-19 16:36:11.791252] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:21.495 [2024-11-19 
16:36:11.791275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232ef40) on tqpair=0x22d4650 00:30:21.495 [2024-11-19 16:36:11.791288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.495 [2024-11-19 16:36:11.791301] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f0c0) on tqpair=0x22d4650 00:30:21.495 [2024-11-19 16:36:11.791310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.495 [2024-11-19 16:36:11.791319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f240) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.791327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.496 [2024-11-19 16:36:11.791335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.791343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.496 [2024-11-19 16:36:11.791363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791380] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.791391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.791417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.791597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 
16:36:11.791613] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.791620] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791627] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.791639] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.791665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.791693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.791783] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.791798] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.791805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791812] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.791820] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:21.496 [2024-11-19 16:36:11.791829] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:21.496 [2024-11-19 16:36:11.791845] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.791854] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 
[2024-11-19 16:36:11.791861] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.791872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.791893] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.791970] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.791984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.791991] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792036] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792047] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.792145] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.792158] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.792166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on 
tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792189] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792199] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792238] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.792314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.792328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.792335] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792342] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792376] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792408] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.792498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.792510] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:30:21.496 [2024-11-19 16:36:11.792517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792540] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792565] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792571] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792604] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.792683] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.792697] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.792704] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792710] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792748] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.792857] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.792871] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.496 [2024-11-19 16:36:11.792878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.496 [2024-11-19 16:36:11.792901] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792911] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.496 [2024-11-19 16:36:11.792918] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.496 [2024-11-19 16:36:11.792928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.496 [2024-11-19 16:36:11.792950] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.496 [2024-11-19 16:36:11.793025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.496 [2024-11-19 16:36:11.793037] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793044] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793051] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793067] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793095] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793105] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793128] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.793204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.793217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793224] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793248] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793257] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793264] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793275] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.793382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.793396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793403] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793427] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793440] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793448] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.793556] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.793570] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793583] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793610] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793648] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.793724] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.793738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793745] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793752] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793778] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793785] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.793893] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.793908] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.793915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.793938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.793954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.793965] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.793987] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.794081] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 
16:36:11.794096] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.794103] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794110] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.794127] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.794159] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.794181] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.794260] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.794274] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.794281] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794288] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.794304] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794321] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.794332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 
16:36:11.794364] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.794456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.794470] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.794477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794484] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.794500] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794510] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794516] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.794527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.497 [2024-11-19 16:36:11.794549] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.497 [2024-11-19 16:36:11.794641] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.497 [2024-11-19 16:36:11.794653] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.497 [2024-11-19 16:36:11.794660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794667] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.497 [2024-11-19 16:36:11.794683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.497 [2024-11-19 16:36:11.794699] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.497 [2024-11-19 16:36:11.794710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.498 [2024-11-19 16:36:11.794731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.498 [2024-11-19 16:36:11.794804] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.498 [2024-11-19 16:36:11.794818] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.498 [2024-11-19 16:36:11.794824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.794831] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.498 [2024-11-19 16:36:11.794848] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.794857] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.794864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.498 [2024-11-19 16:36:11.794879] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.498 [2024-11-19 16:36:11.794901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.498 [2024-11-19 16:36:11.794976] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.498 [2024-11-19 16:36:11.794988] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.498 [2024-11-19 16:36:11.794995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.795002] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.498 [2024-11-19 16:36:11.795018] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.795028] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.795035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.498 [2024-11-19 16:36:11.795045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.498 [2024-11-19 16:36:11.799077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.498 [2024-11-19 16:36:11.799100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.498 [2024-11-19 16:36:11.799111] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.498 [2024-11-19 16:36:11.799118] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.799125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.498 [2024-11-19 16:36:11.799143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.799154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.799160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22d4650) 00:30:21.498 [2024-11-19 16:36:11.799171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.498 [2024-11-19 16:36:11.799195] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x232f3c0, cid 3, qid 0 00:30:21.498 [2024-11-19 16:36:11.799292] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.498 [2024-11-19 16:36:11.799305] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.498 [2024-11-19 16:36:11.799313] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.498 [2024-11-19 16:36:11.799319] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x232f3c0) on tqpair=0x22d4650 00:30:21.498 [2024-11-19 16:36:11.799333] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds 00:30:21.498 00:30:21.498 16:36:11 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:21.760 [2024-11-19 16:36:11.832166] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:30:21.760 [2024-11-19 16:36:11.832209] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339730 ] 00:30:21.760 [2024-11-19 16:36:11.882590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:21.760 [2024-11-19 16:36:11.882646] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:21.760 [2024-11-19 16:36:11.882659] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:21.760 [2024-11-19 16:36:11.882673] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:21.760 [2024-11-19 16:36:11.882686] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:21.760 [2024-11-19 16:36:11.883115] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:21.760 [2024-11-19 16:36:11.883170] 
nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16d5650 0 00:30:21.760 [2024-11-19 16:36:11.889085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:21.760 [2024-11-19 16:36:11.889106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:21.760 [2024-11-19 16:36:11.889114] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:21.760 [2024-11-19 16:36:11.889120] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:21.760 [2024-11-19 16:36:11.889166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.889178] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.889184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.760 [2024-11-19 16:36:11.889199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:21.760 [2024-11-19 16:36:11.889226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.760 [2024-11-19 16:36:11.897102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.760 [2024-11-19 16:36:11.897120] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.760 [2024-11-19 16:36:11.897127] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897134] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.760 [2024-11-19 16:36:11.897146] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:21.760 [2024-11-19 16:36:11.897172] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:21.760 [2024-11-19 16:36:11.897182] 
nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:21.760 [2024-11-19 16:36:11.897200] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897216] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.760 [2024-11-19 16:36:11.897228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.760 [2024-11-19 16:36:11.897252] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.760 [2024-11-19 16:36:11.897343] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.760 [2024-11-19 16:36:11.897357] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.760 [2024-11-19 16:36:11.897364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897370] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.760 [2024-11-19 16:36:11.897378] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:21.760 [2024-11-19 16:36:11.897392] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:21.760 [2024-11-19 16:36:11.897405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897419] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.760 [2024-11-19 16:36:11.897429] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.760 [2024-11-19 16:36:11.897456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.760 [2024-11-19 16:36:11.897535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.760 [2024-11-19 16:36:11.897549] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.760 [2024-11-19 16:36:11.897556] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.760 [2024-11-19 16:36:11.897571] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:30:21.760 [2024-11-19 16:36:11.897585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:21.760 [2024-11-19 16:36:11.897597] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897604] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.760 [2024-11-19 16:36:11.897610] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.760 [2024-11-19 16:36:11.897620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.760 [2024-11-19 16:36:11.897643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.760 [2024-11-19 16:36:11.897722] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.760 [2024-11-19 16:36:11.897736] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.897742] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.897749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.897757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:21.761 [2024-11-19 16:36:11.897774] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.897798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.897804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.897814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.761 [2024-11-19 16:36:11.897835] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.761 [2024-11-19 16:36:11.897925] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.761 [2024-11-19 16:36:11.897938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.897945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.897951] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.897959] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:21.761 [2024-11-19 16:36:11.897967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:21.761 [2024-11-19 16:36:11.897980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting 
state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:21.761 [2024-11-19 16:36:11.898090] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:21.761 [2024-11-19 16:36:11.898101] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:21.761 [2024-11-19 16:36:11.898113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898120] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.898142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.761 [2024-11-19 16:36:11.898164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.761 [2024-11-19 16:36:11.898241] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.761 [2024-11-19 16:36:11.898255] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.898261] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.898276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:21.761 [2024-11-19 16:36:11.898293] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898308] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.898319] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.761 [2024-11-19 16:36:11.898340] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.761 [2024-11-19 16:36:11.898425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.761 [2024-11-19 16:36:11.898437] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.898444] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.898458] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:21.761 [2024-11-19 16:36:11.898466] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:21.761 [2024-11-19 16:36:11.898479] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:21.761 [2024-11-19 16:36:11.898497] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:21.761 [2024-11-19 16:36:11.898511] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898519] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.898529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.761 [2024-11-19 16:36:11.898551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.761 [2024-11-19 16:36:11.898675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.761 [2024-11-19 16:36:11.898690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.761 [2024-11-19 16:36:11.898696] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898703] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=4096, cccid=0 00:30:21.761 [2024-11-19 16:36:11.898710] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x172ff40) on tqpair(0x16d5650): expected_datao=0, payload_size=4096 00:30:21.761 [2024-11-19 16:36:11.898717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898727] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898750] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.761 [2024-11-19 16:36:11.898761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.898768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898774] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.898784] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:21.761 [2024-11-19 16:36:11.898793] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:21.761 [2024-11-19 16:36:11.898800] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:21.761 [2024-11-19 16:36:11.898811] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:21.761 [2024-11-19 16:36:11.898820] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:21.761 [2024-11-19 16:36:11.898828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:21.761 [2024-11-19 16:36:11.898847] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:21.761 [2024-11-19 16:36:11.898861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.898874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.898885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:21.761 [2024-11-19 16:36:11.898907] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.761 [2024-11-19 16:36:11.898988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.761 [2024-11-19 16:36:11.899002] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.761 [2024-11-19 16:36:11.899008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.761 [2024-11-19 16:36:11.899025] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899033] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899039] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.899049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.761 [2024-11-19 16:36:11.899059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899066] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16d5650) 00:30:21.761 [2024-11-19 16:36:11.899093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.761 [2024-11-19 16:36:11.899103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.761 [2024-11-19 16:36:11.899116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.899125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.762 [2024-11-19 16:36:11.899135] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899141] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899148] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.899160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.762 [2024-11-19 16:36:11.899170] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899185] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.899214] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.762 [2024-11-19 16:36:11.899237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x172ff40, cid 0, qid 0 00:30:21.762 [2024-11-19 16:36:11.899249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17300c0, cid 1, qid 0 00:30:21.762 [2024-11-19 16:36:11.899257] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730240, cid 2, qid 0 00:30:21.762 [2024-11-19 16:36:11.899265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.762 [2024-11-19 16:36:11.899272] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.762 [2024-11-19 16:36:11.899386] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.762 [2024-11-19 16:36:11.899400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.762 [2024-11-19 16:36:11.899407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.762 [2024-11-19 16:36:11.899426] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:21.762 [2024-11-19 16:36:11.899436] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899451] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899479] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.899496] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:21.762 [2024-11-19 16:36:11.899518] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.762 [2024-11-19 16:36:11.899595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.762 [2024-11-19 16:36:11.899608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.762 [2024-11-19 16:36:11.899615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.762 [2024-11-19 16:36:11.899689] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899709] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.899723] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899734] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.899745] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.762 [2024-11-19 16:36:11.899767] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.762 [2024-11-19 16:36:11.899867] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.762 [2024-11-19 16:36:11.899881] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.762 [2024-11-19 16:36:11.899888] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899894] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=4096, cccid=4 00:30:21.762 [2024-11-19 16:36:11.899902] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1730540) on tqpair(0x16d5650): expected_datao=0, payload_size=4096 00:30:21.762 [2024-11-19 16:36:11.899909] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899919] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.899927] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.762 [2024-11-19 16:36:11.940162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.762 [2024-11-19 16:36:11.940169] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.762 [2024-11-19 16:36:11.940191] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:21.762 [2024-11-19 16:36:11.940213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.940232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.940246] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940254] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.940265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.762 [2024-11-19 16:36:11.940289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.762 [2024-11-19 16:36:11.940392] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.762 [2024-11-19 16:36:11.940407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.762 [2024-11-19 16:36:11.940414] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940420] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=4096, cccid=4 00:30:21.762 [2024-11-19 16:36:11.940427] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1730540) on tqpair(0x16d5650): expected_datao=0, payload_size=4096 00:30:21.762 [2024-11-19 16:36:11.940435] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940452] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.940461] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.981138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.762 [2024-11-19 16:36:11.981157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.762 [2024-11-19 16:36:11.981164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.981171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.762 [2024-11-19 16:36:11.981192] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.981216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:21.762 [2024-11-19 16:36:11.981232] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.762 [2024-11-19 16:36:11.981240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.762 [2024-11-19 16:36:11.981251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.762 [2024-11-19 16:36:11.981274] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.763 [2024-11-19 16:36:11.981372] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.763 [2024-11-19 16:36:11.981386] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.763 [2024-11-19 16:36:11.981393] 
nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981399] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=4096, cccid=4 00:30:21.763 [2024-11-19 16:36:11.981407] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1730540) on tqpair(0x16d5650): expected_datao=0, payload_size=4096 00:30:21.763 [2024-11-19 16:36:11.981414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981424] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981432] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981443] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.981453] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.981459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981466] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.981478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981508] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981519] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981528] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981545] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:21.763 [2024-11-19 16:36:11.981552] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:21.763 [2024-11-19 16:36:11.981560] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:21.763 [2024-11-19 16:36:11.981579] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981588] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.981598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.981610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981617] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981627] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.981637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:21.763 [2024-11-19 16:36:11.981663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.763 [2024-11-19 16:36:11.981675] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17306c0, cid 5, 
qid 0 00:30:21.763 [2024-11-19 16:36:11.981762] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.981774] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.981780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981787] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.981797] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.981806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.981813] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981819] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17306c0) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.981835] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981844] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.981854] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.981875] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17306c0, cid 5, qid 0 00:30:21.763 [2024-11-19 16:36:11.981961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.981975] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.981981] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.981988] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17306c0) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.982004] 
nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982023] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982044] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17306c0, cid 5, qid 0 00:30:21.763 [2024-11-19 16:36:11.982129] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.982143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.982150] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17306c0) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.982172] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17306c0, cid 5, qid 0 00:30:21.763 [2024-11-19 16:36:11.982285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.763 [2024-11-19 16:36:11.982297] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.763 [2024-11-19 16:36:11.982304] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982310] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x17306c0) on tqpair=0x16d5650 00:30:21.763 [2024-11-19 16:36:11.982337] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982349] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982389] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982400] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982408] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982429] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982436] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16d5650) 00:30:21.763 [2024-11-19 16:36:11.982445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.763 [2024-11-19 16:36:11.982468] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17306c0, 
cid 5, qid 0 00:30:21.763 [2024-11-19 16:36:11.982479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730540, cid 4, qid 0 00:30:21.763 [2024-11-19 16:36:11.982487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1730840, cid 6, qid 0 00:30:21.763 [2024-11-19 16:36:11.982495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17309c0, cid 7, qid 0 00:30:21.763 [2024-11-19 16:36:11.982672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.763 [2024-11-19 16:36:11.982685] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.763 [2024-11-19 16:36:11.982691] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982697] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=8192, cccid=5 00:30:21.763 [2024-11-19 16:36:11.982704] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17306c0) on tqpair(0x16d5650): expected_datao=0, payload_size=8192 00:30:21.763 [2024-11-19 16:36:11.982727] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982748] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982757] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.763 [2024-11-19 16:36:11.982766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.763 [2024-11-19 16:36:11.982775] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.764 [2024-11-19 16:36:11.982781] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982788] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=512, cccid=4 00:30:21.764 [2024-11-19 16:36:11.982795] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1730540) on tqpair(0x16d5650): 
expected_datao=0, payload_size=512 00:30:21.764 [2024-11-19 16:36:11.982802] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982811] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982818] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982826] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.764 [2024-11-19 16:36:11.982835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.764 [2024-11-19 16:36:11.982844] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982851] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=512, cccid=6 00:30:21.764 [2024-11-19 16:36:11.982859] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1730840) on tqpair(0x16d5650): expected_datao=0, payload_size=512 00:30:21.764 [2024-11-19 16:36:11.982866] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982875] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982881] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982890] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:21.764 [2024-11-19 16:36:11.982898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:21.764 [2024-11-19 16:36:11.982905] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982911] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16d5650): datao=0, datal=4096, cccid=7 00:30:21.764 [2024-11-19 16:36:11.982918] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17309c0) on tqpair(0x16d5650): expected_datao=0, payload_size=4096 00:30:21.764 [2024-11-19 
16:36:11.982925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982934] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982941] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.764 [2024-11-19 16:36:11.982961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.764 [2024-11-19 16:36:11.982968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.982974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17306c0) on tqpair=0x16d5650 00:30:21.764 [2024-11-19 16:36:11.982995] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.764 [2024-11-19 16:36:11.983021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.764 [2024-11-19 16:36:11.983028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.983034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730540) on tqpair=0x16d5650 00:30:21.764 [2024-11-19 16:36:11.983049] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.764 [2024-11-19 16:36:11.983060] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.764 [2024-11-19 16:36:11.983066] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.764 [2024-11-19 16:36:11.983096] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730840) on tqpair=0x16d5650 00:30:21.764 [2024-11-19 16:36:11.983108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.764 [2024-11-19 16:36:11.983118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.764 [2024-11-19 16:36:11.983125] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:21.764 [2024-11-19 16:36:11.983131] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17309c0) on tqpair=0x16d5650 00:30:21.764 ===================================================== 00:30:21.764 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:21.764 ===================================================== 00:30:21.764 Controller Capabilities/Features 00:30:21.764 ================================ 00:30:21.764 Vendor ID: 8086 00:30:21.764 Subsystem Vendor ID: 8086 00:30:21.764 Serial Number: SPDK00000000000001 00:30:21.764 Model Number: SPDK bdev Controller 00:30:21.764 Firmware Version: 25.01 00:30:21.764 Recommended Arb Burst: 6 00:30:21.764 IEEE OUI Identifier: e4 d2 5c 00:30:21.764 Multi-path I/O 00:30:21.764 May have multiple subsystem ports: Yes 00:30:21.764 May have multiple controllers: Yes 00:30:21.764 Associated with SR-IOV VF: No 00:30:21.764 Max Data Transfer Size: 131072 00:30:21.764 Max Number of Namespaces: 32 00:30:21.764 Max Number of I/O Queues: 127 00:30:21.764 NVMe Specification Version (VS): 1.3 00:30:21.764 NVMe Specification Version (Identify): 1.3 00:30:21.764 Maximum Queue Entries: 128 00:30:21.764 Contiguous Queues Required: Yes 00:30:21.764 Arbitration Mechanisms Supported 00:30:21.764 Weighted Round Robin: Not Supported 00:30:21.764 Vendor Specific: Not Supported 00:30:21.764 Reset Timeout: 15000 ms 00:30:21.764 Doorbell Stride: 4 bytes 00:30:21.764 NVM Subsystem Reset: Not Supported 00:30:21.764 Command Sets Supported 00:30:21.764 NVM Command Set: Supported 00:30:21.764 Boot Partition: Not Supported 00:30:21.764 Memory Page Size Minimum: 4096 bytes 00:30:21.764 Memory Page Size Maximum: 4096 bytes 00:30:21.764 Persistent Memory Region: Not Supported 00:30:21.764 Optional Asynchronous Events Supported 00:30:21.764 Namespace Attribute Notices: Supported 00:30:21.764 Firmware Activation Notices: Not Supported 00:30:21.764 ANA Change Notices: Not Supported 00:30:21.764 PLE Aggregate Log 
Change Notices: Not Supported 00:30:21.764 LBA Status Info Alert Notices: Not Supported 00:30:21.764 EGE Aggregate Log Change Notices: Not Supported 00:30:21.764 Normal NVM Subsystem Shutdown event: Not Supported 00:30:21.764 Zone Descriptor Change Notices: Not Supported 00:30:21.764 Discovery Log Change Notices: Not Supported 00:30:21.764 Controller Attributes 00:30:21.764 128-bit Host Identifier: Supported 00:30:21.764 Non-Operational Permissive Mode: Not Supported 00:30:21.764 NVM Sets: Not Supported 00:30:21.764 Read Recovery Levels: Not Supported 00:30:21.764 Endurance Groups: Not Supported 00:30:21.764 Predictable Latency Mode: Not Supported 00:30:21.764 Traffic Based Keep ALive: Not Supported 00:30:21.764 Namespace Granularity: Not Supported 00:30:21.764 SQ Associations: Not Supported 00:30:21.764 UUID List: Not Supported 00:30:21.764 Multi-Domain Subsystem: Not Supported 00:30:21.764 Fixed Capacity Management: Not Supported 00:30:21.764 Variable Capacity Management: Not Supported 00:30:21.764 Delete Endurance Group: Not Supported 00:30:21.764 Delete NVM Set: Not Supported 00:30:21.764 Extended LBA Formats Supported: Not Supported 00:30:21.764 Flexible Data Placement Supported: Not Supported 00:30:21.764 00:30:21.764 Controller Memory Buffer Support 00:30:21.764 ================================ 00:30:21.764 Supported: No 00:30:21.764 00:30:21.764 Persistent Memory Region Support 00:30:21.764 ================================ 00:30:21.764 Supported: No 00:30:21.764 00:30:21.764 Admin Command Set Attributes 00:30:21.764 ============================ 00:30:21.764 Security Send/Receive: Not Supported 00:30:21.764 Format NVM: Not Supported 00:30:21.764 Firmware Activate/Download: Not Supported 00:30:21.764 Namespace Management: Not Supported 00:30:21.764 Device Self-Test: Not Supported 00:30:21.764 Directives: Not Supported 00:30:21.764 NVMe-MI: Not Supported 00:30:21.764 Virtualization Management: Not Supported 00:30:21.765 Doorbell Buffer Config: Not Supported 
00:30:21.765 Get LBA Status Capability: Not Supported 00:30:21.765 Command & Feature Lockdown Capability: Not Supported 00:30:21.765 Abort Command Limit: 4 00:30:21.765 Async Event Request Limit: 4 00:30:21.765 Number of Firmware Slots: N/A 00:30:21.765 Firmware Slot 1 Read-Only: N/A 00:30:21.765 Firmware Activation Without Reset: N/A 00:30:21.765 Multiple Update Detection Support: N/A 00:30:21.765 Firmware Update Granularity: No Information Provided 00:30:21.765 Per-Namespace SMART Log: No 00:30:21.765 Asymmetric Namespace Access Log Page: Not Supported 00:30:21.765 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:21.765 Command Effects Log Page: Supported 00:30:21.765 Get Log Page Extended Data: Supported 00:30:21.765 Telemetry Log Pages: Not Supported 00:30:21.765 Persistent Event Log Pages: Not Supported 00:30:21.765 Supported Log Pages Log Page: May Support 00:30:21.765 Commands Supported & Effects Log Page: Not Supported 00:30:21.765 Feature Identifiers & Effects Log Page:May Support 00:30:21.765 NVMe-MI Commands & Effects Log Page: May Support 00:30:21.765 Data Area 4 for Telemetry Log: Not Supported 00:30:21.765 Error Log Page Entries Supported: 128 00:30:21.765 Keep Alive: Supported 00:30:21.765 Keep Alive Granularity: 10000 ms 00:30:21.765 00:30:21.765 NVM Command Set Attributes 00:30:21.765 ========================== 00:30:21.765 Submission Queue Entry Size 00:30:21.765 Max: 64 00:30:21.765 Min: 64 00:30:21.765 Completion Queue Entry Size 00:30:21.765 Max: 16 00:30:21.765 Min: 16 00:30:21.765 Number of Namespaces: 32 00:30:21.765 Compare Command: Supported 00:30:21.765 Write Uncorrectable Command: Not Supported 00:30:21.765 Dataset Management Command: Supported 00:30:21.765 Write Zeroes Command: Supported 00:30:21.765 Set Features Save Field: Not Supported 00:30:21.765 Reservations: Supported 00:30:21.765 Timestamp: Not Supported 00:30:21.765 Copy: Supported 00:30:21.765 Volatile Write Cache: Present 00:30:21.765 Atomic Write Unit (Normal): 1 00:30:21.765 
Atomic Write Unit (PFail): 1 00:30:21.765 Atomic Compare & Write Unit: 1 00:30:21.765 Fused Compare & Write: Supported 00:30:21.765 Scatter-Gather List 00:30:21.765 SGL Command Set: Supported 00:30:21.765 SGL Keyed: Supported 00:30:21.765 SGL Bit Bucket Descriptor: Not Supported 00:30:21.765 SGL Metadata Pointer: Not Supported 00:30:21.765 Oversized SGL: Not Supported 00:30:21.765 SGL Metadata Address: Not Supported 00:30:21.765 SGL Offset: Supported 00:30:21.765 Transport SGL Data Block: Not Supported 00:30:21.765 Replay Protected Memory Block: Not Supported 00:30:21.765 00:30:21.765 Firmware Slot Information 00:30:21.765 ========================= 00:30:21.765 Active slot: 1 00:30:21.765 Slot 1 Firmware Revision: 25.01 00:30:21.765 00:30:21.765 00:30:21.765 Commands Supported and Effects 00:30:21.765 ============================== 00:30:21.765 Admin Commands 00:30:21.765 -------------- 00:30:21.765 Get Log Page (02h): Supported 00:30:21.765 Identify (06h): Supported 00:30:21.765 Abort (08h): Supported 00:30:21.765 Set Features (09h): Supported 00:30:21.765 Get Features (0Ah): Supported 00:30:21.765 Asynchronous Event Request (0Ch): Supported 00:30:21.765 Keep Alive (18h): Supported 00:30:21.765 I/O Commands 00:30:21.765 ------------ 00:30:21.765 Flush (00h): Supported LBA-Change 00:30:21.765 Write (01h): Supported LBA-Change 00:30:21.765 Read (02h): Supported 00:30:21.765 Compare (05h): Supported 00:30:21.765 Write Zeroes (08h): Supported LBA-Change 00:30:21.765 Dataset Management (09h): Supported LBA-Change 00:30:21.765 Copy (19h): Supported LBA-Change 00:30:21.765 00:30:21.765 Error Log 00:30:21.765 ========= 00:30:21.765 00:30:21.765 Arbitration 00:30:21.765 =========== 00:30:21.765 Arbitration Burst: 1 00:30:21.765 00:30:21.765 Power Management 00:30:21.765 ================ 00:30:21.765 Number of Power States: 1 00:30:21.765 Current Power State: Power State #0 00:30:21.765 Power State #0: 00:30:21.765 Max Power: 0.00 W 00:30:21.765 Non-Operational State: 
Operational 00:30:21.765 Entry Latency: Not Reported 00:30:21.765 Exit Latency: Not Reported 00:30:21.765 Relative Read Throughput: 0 00:30:21.765 Relative Read Latency: 0 00:30:21.765 Relative Write Throughput: 0 00:30:21.765 Relative Write Latency: 0 00:30:21.765 Idle Power: Not Reported 00:30:21.765 Active Power: Not Reported 00:30:21.765 Non-Operational Permissive Mode: Not Supported 00:30:21.765 00:30:21.765 Health Information 00:30:21.765 ================== 00:30:21.765 Critical Warnings: 00:30:21.765 Available Spare Space: OK 00:30:21.765 Temperature: OK 00:30:21.765 Device Reliability: OK 00:30:21.765 Read Only: No 00:30:21.765 Volatile Memory Backup: OK 00:30:21.765 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:21.765 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:21.765 Available Spare: 0% 00:30:21.765 Available Spare Threshold: 0% 00:30:21.765 Life Percentage Used:[2024-11-19 16:36:11.983245] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.765 [2024-11-19 16:36:11.983257] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16d5650) 00:30:21.765 [2024-11-19 16:36:11.983268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.765 [2024-11-19 16:36:11.983291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17309c0, cid 7, qid 0 00:30:21.765 [2024-11-19 16:36:11.983388] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.765 [2024-11-19 16:36:11.983401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.765 [2024-11-19 16:36:11.983408] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.765 [2024-11-19 16:36:11.983415] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17309c0) on tqpair=0x16d5650 00:30:21.765 [2024-11-19 16:36:11.983461] 
nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:21.765 [2024-11-19 16:36:11.983481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x172ff40) on tqpair=0x16d5650 00:30:21.765 [2024-11-19 16:36:11.983492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.765 [2024-11-19 16:36:11.983501] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17300c0) on tqpair=0x16d5650 00:30:21.765 [2024-11-19 16:36:11.983509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.766 [2024-11-19 16:36:11.983517] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1730240) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.983524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.766 [2024-11-19 16:36:11.983532] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.983540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:21.766 [2024-11-19 16:36:11.983552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983560] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983566] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.983576] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.983613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 
0 00:30:21.766 [2024-11-19 16:36:11.983708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.983723] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.983729] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.983747] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983755] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983761] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.983772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.983798] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.983886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.983898] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.983904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983911] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.983918] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:21.766 [2024-11-19 16:36:11.983926] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:21.766 [2024-11-19 16:36:11.983942] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:30:21.766 [2024-11-19 16:36:11.983951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.983957] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.983967] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.983992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984092] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984098] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.984115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984131] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.984141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.984162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984257] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984270] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984277] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:30:21.766 [2024-11-19 16:36:11.984283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.984300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984315] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.984325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.984346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984435] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984441] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984448] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.984464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984473] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.984489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.984510] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984584] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984597] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984604] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.984627] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.984652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.984673] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984741] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984754] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984761] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984767] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.766 [2024-11-19 16:36:11.984783] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984792] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.766 [2024-11-19 16:36:11.984808] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:21.766 [2024-11-19 16:36:11.984829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.766 [2024-11-19 16:36:11.984919] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.766 [2024-11-19 16:36:11.984931] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.766 [2024-11-19 16:36:11.984937] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.766 [2024-11-19 16:36:11.984944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.984959] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.984968] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.984975] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.984985] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985005] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985098] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985112] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.985119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985126] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.985142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985151] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985157] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.985168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985188] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985259] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985271] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.985278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985284] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.985300] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985316] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.985326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985422] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985442] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.985450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985457] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 
16:36:11.985473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.985499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985595] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.985615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.985638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985647] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985653] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.985663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985757] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985769] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 
16:36:11.985775] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985782] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.985798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985807] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985813] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.985823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.985843] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.985969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.985983] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.985989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.985996] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.986012] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.986037] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.986058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 
00:30:21.767 [2024-11-19 16:36:11.986141] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.986154] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.986164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986172] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.986188] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986197] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986203] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.986213] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.986234] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.986308] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.986321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.986327] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.986349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986359] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.767 [2024-11-19 16:36:11.986375] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.767 [2024-11-19 16:36:11.986396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.767 [2024-11-19 16:36:11.986467] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.767 [2024-11-19 16:36:11.986480] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.767 [2024-11-19 16:36:11.986487] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.767 [2024-11-19 16:36:11.986493] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.767 [2024-11-19 16:36:11.986509] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986519] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986525] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.986535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.986556] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.986630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.986642] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.986648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.986670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986679] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.986696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.986716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.986840] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.986852] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.986859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986869] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.986885] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.986901] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.986911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.986931] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987005] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.987017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987030] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987046] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.987104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987177] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.987190] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987197] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987220] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987235] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987246] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.987267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987341] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 
16:36:11.987353] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987359] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987381] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987391] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987407] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.987427] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.987512] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987525] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987545] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987555] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 
16:36:11.987592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987718] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.987730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987743] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987758] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987768] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987774] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987784] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.987804] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.987878] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.987892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.987898] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987905] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.987921] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987930] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.987936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.987946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.987967] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.988041] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.988053] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.768 [2024-11-19 16:36:11.988059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.988066] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.768 [2024-11-19 16:36:11.992098] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.992110] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:21.768 [2024-11-19 16:36:11.992116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16d5650) 00:30:21.768 [2024-11-19 16:36:11.992127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:21.768 [2024-11-19 16:36:11.992149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17303c0, cid 3, qid 0 00:30:21.768 [2024-11-19 16:36:11.992279] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:21.768 [2024-11-19 16:36:11.992292] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:21.769 [2024-11-19 16:36:11.992299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:21.769 [2024-11-19 16:36:11.992306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17303c0) on tqpair=0x16d5650 00:30:21.769 [2024-11-19 16:36:11.992319] 
nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 8 milliseconds 00:30:21.769 0% 00:30:21.769 Data Units Read: 0 00:30:21.769 Data Units Written: 0 00:30:21.769 Host Read Commands: 0 00:30:21.769 Host Write Commands: 0 00:30:21.769 Controller Busy Time: 0 minutes 00:30:21.769 Power Cycles: 0 00:30:21.769 Power On Hours: 0 hours 00:30:21.769 Unsafe Shutdowns: 0 00:30:21.769 Unrecoverable Media Errors: 0 00:30:21.769 Lifetime Error Log Entries: 0 00:30:21.769 Warning Temperature Time: 0 minutes 00:30:21.769 Critical Temperature Time: 0 minutes 00:30:21.769 00:30:21.769 Number of Queues 00:30:21.769 ================ 00:30:21.769 Number of I/O Submission Queues: 127 00:30:21.769 Number of I/O Completion Queues: 127 00:30:21.769 00:30:21.769 Active Namespaces 00:30:21.769 ================= 00:30:21.769 Namespace ID:1 00:30:21.769 Error Recovery Timeout: Unlimited 00:30:21.769 Command Set Identifier: NVM (00h) 00:30:21.769 Deallocate: Supported 00:30:21.769 Deallocated/Unwritten Error: Not Supported 00:30:21.769 Deallocated Read Value: Unknown 00:30:21.769 Deallocate in Write Zeroes: Not Supported 00:30:21.769 Deallocated Guard Field: 0xFFFF 00:30:21.769 Flush: Supported 00:30:21.769 Reservation: Supported 00:30:21.769 Namespace Sharing Capabilities: Multiple Controllers 00:30:21.769 Size (in LBAs): 131072 (0GiB) 00:30:21.769 Capacity (in LBAs): 131072 (0GiB) 00:30:21.769 Utilization (in LBAs): 131072 (0GiB) 00:30:21.769 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:21.769 EUI64: ABCDEF0123456789 00:30:21.769 UUID: d59b8961-67da-40e2-96a1-6885006f6616 00:30:21.769 Thin Provisioning: Not Supported 00:30:21.769 Per-NS Atomic Units: Yes 00:30:21.769 Atomic Boundary Size (Normal): 0 00:30:21.769 Atomic Boundary Size (PFail): 0 00:30:21.769 Atomic Boundary Offset: 0 00:30:21.769 Maximum Single Source Range Length: 65535 00:30:21.769 Maximum Copy Length: 65535 00:30:21.769 Maximum Source Range Count: 1 00:30:21.769 
NGUID/EUI64 Never Reused: No 00:30:21.769 Namespace Write Protected: No 00:30:21.769 Number of LBA Formats: 1 00:30:21.769 Current LBA Format: LBA Format #00 00:30:21.769 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:21.769 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:21.769 rmmod nvme_tcp 00:30:21.769 rmmod nvme_fabrics 00:30:21.769 rmmod nvme_keyring 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@517 -- # '[' -n 339698 ']' 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 339698 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 339698 ']' 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 339698 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:21.769 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 339698 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 339698' 00:30:22.028 killing process with pid 339698 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 339698 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 339698 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # 
iptables-restore 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:22.028 16:36:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:24.567 00:30:24.567 real 0m5.550s 00:30:24.567 user 0m4.531s 00:30:24.567 sys 0m1.906s 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:24.567 ************************************ 00:30:24.567 END TEST nvmf_identify 00:30:24.567 ************************************ 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:24.567 16:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:24.568 ************************************ 00:30:24.568 START TEST nvmf_perf 00:30:24.568 ************************************ 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:24.568 * Looking for test storage... 
00:30:24.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.568 --rc genhtml_branch_coverage=1 00:30:24.568 --rc genhtml_function_coverage=1 00:30:24.568 --rc genhtml_legend=1 00:30:24.568 --rc geninfo_all_blocks=1 00:30:24.568 --rc geninfo_unexecuted_blocks=1 00:30:24.568 00:30:24.568 ' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:24.568 --rc genhtml_branch_coverage=1 00:30:24.568 --rc genhtml_function_coverage=1 00:30:24.568 --rc genhtml_legend=1 00:30:24.568 --rc geninfo_all_blocks=1 00:30:24.568 --rc geninfo_unexecuted_blocks=1 00:30:24.568 00:30:24.568 ' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.568 --rc genhtml_branch_coverage=1 00:30:24.568 --rc genhtml_function_coverage=1 00:30:24.568 --rc genhtml_legend=1 00:30:24.568 --rc geninfo_all_blocks=1 00:30:24.568 --rc geninfo_unexecuted_blocks=1 00:30:24.568 00:30:24.568 ' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:24.568 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:24.568 --rc genhtml_branch_coverage=1 00:30:24.568 --rc genhtml_function_coverage=1 00:30:24.568 --rc genhtml_legend=1 00:30:24.568 --rc geninfo_all_blocks=1 00:30:24.568 --rc geninfo_unexecuted_blocks=1 00:30:24.568 00:30:24.568 ' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:24.568 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:24.569 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:24.569 16:36:14 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:24.569 16:36:14 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:26.500 16:36:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:26.500 
16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.500 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:26.501 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:26.501 Found 0000:0a:00.1 (0x8086 - 
0x159b) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:26.501 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:26.501 16:36:16 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:26.501 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:26.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:26.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:30:26.501 00:30:26.501 --- 10.0.0.2 ping statistics --- 00:30:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.501 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:26.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:26.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:30:26.501 00:30:26.501 --- 10.0.0.1 ping statistics --- 00:30:26.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:26.501 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=341768 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 341768 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 341768 ']' 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:26.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:26.501 16:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:26.760 [2024-11-19 16:36:16.870485] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:30:26.760 [2024-11-19 16:36:16.870571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:26.760 [2024-11-19 16:36:16.944029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.760 [2024-11-19 16:36:16.993801] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:26.760 [2024-11-19 16:36:16.993875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:26.760 [2024-11-19 16:36:16.993903] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:26.760 [2024-11-19 16:36:16.993914] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:26.760 [2024-11-19 16:36:16.993924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:26.760 [2024-11-19 16:36:16.995613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:26.760 [2024-11-19 16:36:16.995677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:26.760 [2024-11-19 16:36:16.995746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:26.760 [2024-11-19 16:36:16.995749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:27.018 16:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:30.301 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:30.302 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:30.302 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:30:30.302 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:30.560 16:36:20 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:30.560 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:30:30.560 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:30.560 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:30.560 16:36:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:30.818 [2024-11-19 16:36:21.139328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:31.077 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:31.335 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:31.335 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:31.593 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:31.593 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:31.851 16:36:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:32.109 [2024-11-19 16:36:22.239361] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:32.109 16:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:32.367 16:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:30:32.367 16:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:32.367 16:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:32.367 16:36:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:30:33.744 Initializing NVMe Controllers 00:30:33.744 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:30:33.744 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:30:33.744 Initialization complete. Launching workers. 00:30:33.744 ======================================================== 00:30:33.744 Latency(us) 00:30:33.744 Device Information : IOPS MiB/s Average min max 00:30:33.744 PCIE (0000:88:00.0) NSID 1 from core 0: 86310.07 337.15 370.36 38.45 4296.29 00:30:33.744 ======================================================== 00:30:33.744 Total : 86310.07 337.15 370.36 38.45 4296.29 00:30:33.744 00:30:33.744 16:36:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:35.120 Initializing NVMe Controllers 00:30:35.120 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:35.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:35.120 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:35.120 Initialization complete. Launching workers. 
00:30:35.120 ======================================================== 00:30:35.120 Latency(us) 00:30:35.120 Device Information : IOPS MiB/s Average min max 00:30:35.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 106.94 0.42 9677.63 140.27 45957.43 00:30:35.120 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.97 0.23 16674.78 7938.40 50889.00 00:30:35.120 ======================================================== 00:30:35.120 Total : 166.91 0.65 12191.58 140.27 50889.00 00:30:35.120 00:30:35.120 16:36:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:36.498 Initializing NVMe Controllers 00:30:36.498 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.498 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:36.498 Initialization complete. Launching workers. 
00:30:36.498 ======================================================== 00:30:36.498 Latency(us) 00:30:36.498 Device Information : IOPS MiB/s Average min max 00:30:36.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8522.06 33.29 3755.59 542.87 9826.60 00:30:36.498 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3824.58 14.94 8409.97 6874.63 16868.61 00:30:36.498 ======================================================== 00:30:36.498 Total : 12346.64 48.23 5197.36 542.87 16868.61 00:30:36.498 00:30:36.498 16:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:36.498 16:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:36.498 16:36:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:39.031 Initializing NVMe Controllers 00:30:39.031 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.031 Controller IO queue size 128, less than required. 00:30:39.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.031 Controller IO queue size 128, less than required. 00:30:39.031 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:39.031 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:39.031 Initialization complete. Launching workers. 
00:30:39.031 ======================================================== 00:30:39.031 Latency(us) 00:30:39.031 Device Information : IOPS MiB/s Average min max 00:30:39.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1706.56 426.64 75931.13 49675.78 135693.94 00:30:39.031 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 593.78 148.45 230048.35 78731.39 349572.35 00:30:39.031 ======================================================== 00:30:39.031 Total : 2300.35 575.09 115713.01 49675.78 349572.35 00:30:39.031 00:30:39.031 16:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:39.289 No valid NVMe controllers or AIO or URING devices found 00:30:39.289 Initializing NVMe Controllers 00:30:39.289 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:39.289 Controller IO queue size 128, less than required. 00:30:39.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.289 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:39.289 Controller IO queue size 128, less than required. 00:30:39.289 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:39.289 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:39.289 WARNING: Some requested NVMe devices were skipped 00:30:39.289 16:36:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:41.827 Initializing NVMe Controllers 00:30:41.827 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:41.827 Controller IO queue size 128, less than required. 00:30:41.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.827 Controller IO queue size 128, less than required. 00:30:41.827 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:41.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:41.827 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:41.827 Initialization complete. Launching workers. 
00:30:41.827 00:30:41.827 ==================== 00:30:41.827 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:41.827 TCP transport: 00:30:41.827 polls: 8989 00:30:41.827 idle_polls: 5883 00:30:41.827 sock_completions: 3106 00:30:41.827 nvme_completions: 5591 00:30:41.827 submitted_requests: 8420 00:30:41.827 queued_requests: 1 00:30:41.827 00:30:41.827 ==================== 00:30:41.827 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:41.827 TCP transport: 00:30:41.827 polls: 12113 00:30:41.827 idle_polls: 8781 00:30:41.827 sock_completions: 3332 00:30:41.827 nvme_completions: 6495 00:30:41.827 submitted_requests: 9784 00:30:41.827 queued_requests: 1 00:30:41.827 ======================================================== 00:30:41.827 Latency(us) 00:30:41.827 Device Information : IOPS MiB/s Average min max 00:30:41.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1394.66 348.66 94112.75 47681.40 155222.97 00:30:41.827 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1620.20 405.05 79522.13 42429.27 126990.46 00:30:41.827 ======================================================== 00:30:41.827 Total : 3014.86 753.71 86271.68 42429.27 155222.97 00:30:41.827 00:30:41.827 16:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:41.827 16:36:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:42.086 16:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:42.086 16:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:30:42.086 16:36:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=8774476d-457f-4d79-870a-fa107025ceef 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 8774476d-457f-4d79-870a-fa107025ceef 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8774476d-457f-4d79-870a-fa107025ceef 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:45.372 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:45.630 { 00:30:45.630 "uuid": "8774476d-457f-4d79-870a-fa107025ceef", 00:30:45.630 "name": "lvs_0", 00:30:45.630 "base_bdev": "Nvme0n1", 00:30:45.630 "total_data_clusters": 238234, 00:30:45.630 "free_clusters": 238234, 00:30:45.630 "block_size": 512, 00:30:45.630 "cluster_size": 4194304 00:30:45.630 } 00:30:45.630 ]' 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8774476d-457f-4d79-870a-fa107025ceef") .free_clusters' 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8774476d-457f-4d79-870a-fa107025ceef") .cluster_size' 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:45.630 952936 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:45.630 16:36:35 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8774476d-457f-4d79-870a-fa107025ceef lbd_0 20480 00:30:46.198 16:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1826d481-8721-4646-b8a2-bb29cb683398 00:30:46.198 16:36:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1826d481-8721-4646-b8a2-bb29cb683398 lvs_n_0 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:47.132 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:47.390 { 00:30:47.390 "uuid": "8774476d-457f-4d79-870a-fa107025ceef", 00:30:47.390 "name": "lvs_0", 00:30:47.390 "base_bdev": "Nvme0n1", 00:30:47.390 "total_data_clusters": 238234, 00:30:47.390 "free_clusters": 233114, 00:30:47.390 "block_size": 512, 00:30:47.390 
"cluster_size": 4194304 00:30:47.390 }, 00:30:47.390 { 00:30:47.390 "uuid": "8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73", 00:30:47.390 "name": "lvs_n_0", 00:30:47.390 "base_bdev": "1826d481-8721-4646-b8a2-bb29cb683398", 00:30:47.390 "total_data_clusters": 5114, 00:30:47.390 "free_clusters": 5114, 00:30:47.390 "block_size": 512, 00:30:47.390 "cluster_size": 4194304 00:30:47.390 } 00:30:47.390 ]' 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73") .free_clusters' 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73") .cluster_size' 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:47.390 20456 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:47.390 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8a212ba7-1fe9-4b81-8d7e-a8c0abf57d73 lbd_nest_0 20456 00:30:47.648 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=38b311fa-baf9-4cf7-8193-1f11625a28ce 00:30:47.648 16:36:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:47.907 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:47.907 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 38b311fa-baf9-4cf7-8193-1f11625a28ce 00:30:48.165 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.423 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:48.423 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:48.423 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:48.423 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:48.423 16:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:00.632 Initializing NVMe Controllers 00:31:00.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:00.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:00.632 Initialization complete. Launching workers. 
00:31:00.632 ======================================================== 00:31:00.632 Latency(us) 00:31:00.632 Device Information : IOPS MiB/s Average min max 00:31:00.632 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 46.09 0.02 21714.14 171.81 62041.86 00:31:00.632 ======================================================== 00:31:00.632 Total : 46.09 0.02 21714.14 171.81 62041.86 00:31:00.632 00:31:00.632 16:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:00.632 16:36:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:10.609 Initializing NVMe Controllers 00:31:10.609 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:10.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:10.609 Initialization complete. Launching workers. 
00:31:10.609 ======================================================== 00:31:10.609 Latency(us) 00:31:10.609 Device Information : IOPS MiB/s Average min max 00:31:10.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 79.39 9.92 12605.80 4986.74 50890.00 00:31:10.609 ======================================================== 00:31:10.609 Total : 79.39 9.92 12605.80 4986.74 50890.00 00:31:10.609 00:31:10.609 16:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:10.609 16:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:10.609 16:36:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:20.584 Initializing NVMe Controllers 00:31:20.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:20.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:20.584 Initialization complete. Launching workers. 
00:31:20.584 ======================================================== 00:31:20.584 Latency(us) 00:31:20.584 Device Information : IOPS MiB/s Average min max 00:31:20.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7380.91 3.60 4335.10 289.95 11206.84 00:31:20.584 ======================================================== 00:31:20.584 Total : 7380.91 3.60 4335.10 289.95 11206.84 00:31:20.584 00:31:20.584 16:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:20.584 16:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:30.557 Initializing NVMe Controllers 00:31:30.557 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:30.557 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:30.557 Initialization complete. Launching workers. 
00:31:30.557 ======================================================== 00:31:30.557 Latency(us) 00:31:30.557 Device Information : IOPS MiB/s Average min max 00:31:30.557 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3928.73 491.09 8145.71 795.46 15742.32 00:31:30.557 ======================================================== 00:31:30.557 Total : 3928.73 491.09 8145.71 795.46 15742.32 00:31:30.557 00:31:30.557 16:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:30.557 16:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:30.557 16:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:40.533 Initializing NVMe Controllers 00:31:40.533 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:40.533 Controller IO queue size 128, less than required. 00:31:40.533 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:40.533 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:40.533 Initialization complete. Launching workers. 
00:31:40.533 ======================================================== 00:31:40.533 Latency(us) 00:31:40.533 Device Information : IOPS MiB/s Average min max 00:31:40.533 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11866.18 5.79 10786.74 1714.58 24943.41 00:31:40.533 ======================================================== 00:31:40.533 Total : 11866.18 5.79 10786.74 1714.58 24943.41 00:31:40.533 00:31:40.533 16:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:40.533 16:37:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:52.738 Initializing NVMe Controllers 00:31:52.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:52.738 Controller IO queue size 128, less than required. 00:31:52.738 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:52.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:52.738 Initialization complete. Launching workers. 
00:31:52.738 ======================================================== 00:31:52.738 Latency(us) 00:31:52.738 Device Information : IOPS MiB/s Average min max 00:31:52.738 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1167.78 145.97 110067.70 15909.49 230761.19 00:31:52.738 ======================================================== 00:31:52.738 Total : 1167.78 145.97 110067.70 15909.49 230761.19 00:31:52.738 00:31:52.738 16:37:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:52.738 16:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38b311fa-baf9-4cf7-8193-1f11625a28ce 00:31:52.738 16:37:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1826d481-8721-4646-b8a2-bb29cb683398 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:52.738 rmmod nvme_tcp 00:31:52.738 rmmod nvme_fabrics 00:31:52.738 rmmod nvme_keyring 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 341768 ']' 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 341768 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 341768 ']' 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 341768 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 341768 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 341768' 00:31:52.738 killing process with pid 341768 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 341768 00:31:52.738 16:37:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 341768 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:54.642 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:54.643 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:54.643 16:37:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:56.551 00:31:56.551 real 1m32.124s 00:31:56.551 user 5m38.433s 00:31:56.551 sys 0m16.552s 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:56.551 ************************************ 00:31:56.551 END TEST nvmf_perf 00:31:56.551 ************************************ 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:56.551 ************************************ 00:31:56.551 START TEST nvmf_fio_host 00:31:56.551 ************************************ 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:56.551 * Looking for test storage... 00:31:56.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- 
# export 'LCOV_OPTS= 00:31:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.551 --rc genhtml_branch_coverage=1 00:31:56.551 --rc genhtml_function_coverage=1 00:31:56.551 --rc genhtml_legend=1 00:31:56.551 --rc geninfo_all_blocks=1 00:31:56.551 --rc geninfo_unexecuted_blocks=1 00:31:56.551 00:31:56.551 ' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.551 --rc genhtml_branch_coverage=1 00:31:56.551 --rc genhtml_function_coverage=1 00:31:56.551 --rc genhtml_legend=1 00:31:56.551 --rc geninfo_all_blocks=1 00:31:56.551 --rc geninfo_unexecuted_blocks=1 00:31:56.551 00:31:56.551 ' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.551 --rc genhtml_branch_coverage=1 00:31:56.551 --rc genhtml_function_coverage=1 00:31:56.551 --rc genhtml_legend=1 00:31:56.551 --rc geninfo_all_blocks=1 00:31:56.551 --rc geninfo_unexecuted_blocks=1 00:31:56.551 00:31:56.551 ' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:56.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:56.551 --rc genhtml_branch_coverage=1 00:31:56.551 --rc genhtml_function_coverage=1 00:31:56.551 --rc genhtml_legend=1 00:31:56.551 --rc geninfo_all_blocks=1 00:31:56.551 --rc geninfo_unexecuted_blocks=1 00:31:56.551 00:31:56.551 ' 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.551 16:37:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.551 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:56.552 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:56.552 16:37:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:56.552 16:37:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.0 (0x8086 - 0x159b)' 00:31:59.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:59.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.088 16:37:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.088 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:59.089 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:59.089 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:59.089 16:37:48 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:59.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:59.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:31:59.089 00:31:59.089 --- 10.0.0.2 ping statistics --- 00:31:59.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.089 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:59.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:59.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:31:59.089 00:31:59.089 --- 10.0.0.1 ping statistics --- 00:31:59.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:59.089 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=353887 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 353887 00:31:59.089 
16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 353887 ']' 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:59.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:59.089 16:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.089 [2024-11-19 16:37:49.046022] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:31:59.089 [2024-11-19 16:37:49.046124] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:59.089 [2024-11-19 16:37:49.120695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:59.089 [2024-11-19 16:37:49.164789] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:59.089 [2024-11-19 16:37:49.164862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:59.089 [2024-11-19 16:37:49.164889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:59.089 [2024-11-19 16:37:49.164900] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:59.089 [2024-11-19 16:37:49.164909] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:59.089 [2024-11-19 16:37:49.166479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:59.089 [2024-11-19 16:37:49.166536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:59.089 [2024-11-19 16:37:49.166601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:59.089 [2024-11-19 16:37:49.166604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.089 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:59.089 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:59.089 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:59.345 [2024-11-19 16:37:49.527423] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:59.345 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:59.345 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:59.345 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:59.345 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:59.603 Malloc1 00:31:59.603 16:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:59.862 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:32:00.120 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.378 [2024-11-19 16:37:50.678857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.378 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:00.945 16:37:50 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:00.945 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:00.946 16:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.946 16:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.946 16:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:00.946 16:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:00.946 16:37:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:00.946 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:00.946 fio-3.35 00:32:00.946 Starting 1 thread 00:32:03.477 00:32:03.477 test: (groupid=0, jobs=1): err= 0: pid=354249: Tue Nov 19 16:37:53 2024 00:32:03.477 read: IOPS=8690, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec) 00:32:03.477 slat (nsec): min=1936, max=106583, avg=2387.00, stdev=1408.71 00:32:03.477 clat (usec): min=2263, max=12991, avg=8058.84, stdev=653.92 00:32:03.477 lat (usec): min=2286, max=12993, avg=8061.23, stdev=653.85 00:32:03.477 clat percentiles (usec): 00:32:03.477 | 1.00th=[ 6587], 5.00th=[ 7046], 10.00th=[ 7242], 20.00th=[ 7504], 00:32:03.477 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 8094], 60.00th=[ 8225], 00:32:03.477 | 70.00th=[ 8455], 80.00th=[ 8586], 90.00th=[ 8848], 95.00th=[ 8979], 00:32:03.477 | 99.00th=[ 9503], 99.50th=[ 9634], 99.90th=[10814], 99.95th=[12256], 00:32:03.477 | 99.99th=[12911] 00:32:03.477 bw ( KiB/s): min=33740, max=35384, per=99.92%, avg=34735.00, stdev=702.46, samples=4 00:32:03.477 iops : min= 8435, max= 8846, avg=8683.75, stdev=175.61, samples=4 00:32:03.477 write: IOPS=8684, BW=33.9MiB/s (35.6MB/s)(68.1MiB/2007msec); 0 zone resets 00:32:03.477 slat (nsec): min=2111, max=94025, avg=2533.62, stdev=1119.37 00:32:03.477 clat (usec): min=933, max=12275, avg=6626.04, stdev=549.54 00:32:03.477 lat (usec): min=939, max=12277, avg=6628.57, stdev=549.50 00:32:03.477 clat percentiles (usec): 00:32:03.477 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5997], 20.00th=[ 6194], 00:32:03.477 | 30.00th=[ 6390], 40.00th=[ 6521], 50.00th=[ 6652], 60.00th=[ 6718], 
00:32:03.477 | 70.00th=[ 6915], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7439], 00:32:03.477 | 99.00th=[ 7832], 99.50th=[ 7963], 99.90th=[10945], 99.95th=[11076], 00:32:03.477 | 99.99th=[12256] 00:32:03.477 bw ( KiB/s): min=34496, max=34944, per=99.95%, avg=34720.50, stdev=184.87, samples=4 00:32:03.477 iops : min= 8624, max= 8736, avg=8680.00, stdev=46.19, samples=4 00:32:03.477 lat (usec) : 1000=0.01% 00:32:03.477 lat (msec) : 2=0.02%, 4=0.11%, 10=99.70%, 20=0.16% 00:32:03.477 cpu : usr=63.01%, sys=35.44%, ctx=123, majf=0, minf=41 00:32:03.477 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:03.477 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.477 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:03.477 issued rwts: total=17442,17430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.477 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:03.477 00:32:03.477 Run status group 0 (all jobs): 00:32:03.477 READ: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:32:03.477 WRITE: bw=33.9MiB/s (35.6MB/s), 33.9MiB/s-33.9MiB/s (35.6MB/s-35.6MB/s), io=68.1MiB (71.4MB), run=2007-2007msec 00:32:03.477 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:03.477 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:03.477 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:03.477 16:37:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:03.477 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:03.478 16:37:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:03.478 16:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:03.737 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:03.737 fio-3.35 00:32:03.737 Starting 1 thread 00:32:06.269 00:32:06.269 test: (groupid=0, jobs=1): err= 0: pid=354584: Tue Nov 19 16:37:56 2024 00:32:06.269 read: IOPS=8387, BW=131MiB/s (137MB/s)(263MiB/2006msec) 00:32:06.269 slat (nsec): min=2835, max=94849, avg=3687.60, stdev=1777.14 00:32:06.269 clat (usec): min=2041, max=16729, avg=8863.03, stdev=1972.87 00:32:06.269 lat (usec): min=2044, max=16733, avg=8866.71, stdev=1972.91 00:32:06.269 clat percentiles (usec): 00:32:06.269 | 1.00th=[ 4752], 5.00th=[ 5800], 10.00th=[ 6456], 20.00th=[ 7177], 00:32:06.269 | 30.00th=[ 7832], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9241], 00:32:06.269 | 70.00th=[ 9634], 80.00th=[10290], 90.00th=[11338], 95.00th=[12387], 00:32:06.269 | 99.00th=[14615], 99.50th=[15008], 99.90th=[16319], 99.95th=[16450], 00:32:06.269 | 99.99th=[16450] 00:32:06.269 bw ( KiB/s): min=60128, max=74144, per=49.66%, avg=66648.00, stdev=6320.92, samples=4 00:32:06.269 iops : min= 3758, max= 4634, avg=4165.50, stdev=395.06, samples=4 00:32:06.269 write: IOPS=4809, BW=75.1MiB/s (78.8MB/s)(137MiB/1822msec); 0 zone resets 00:32:06.269 slat (usec): min=30, max=194, avg=34.14, stdev= 6.09 00:32:06.269 clat (usec): min=5832, max=20738, avg=11691.73, stdev=1997.85 00:32:06.269 lat (usec): min=5863, max=20769, avg=11725.86, stdev=1998.11 00:32:06.269 clat 
percentiles (usec): 00:32:06.269 | 1.00th=[ 8029], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[ 9896], 00:32:06.269 | 30.00th=[10421], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:32:06.269 | 70.00th=[12649], 80.00th=[13566], 90.00th=[14353], 95.00th=[15008], 00:32:06.269 | 99.00th=[16909], 99.50th=[17695], 99.90th=[20055], 99.95th=[20579], 00:32:06.269 | 99.99th=[20841] 00:32:06.269 bw ( KiB/s): min=61760, max=77088, per=90.44%, avg=69592.00, stdev=7135.18, samples=4 00:32:06.269 iops : min= 3860, max= 4818, avg=4349.50, stdev=445.95, samples=4 00:32:06.269 lat (msec) : 4=0.13%, 10=57.57%, 20=42.26%, 50=0.04% 00:32:06.269 cpu : usr=76.77%, sys=21.88%, ctx=41, majf=0, minf=67 00:32:06.269 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:32:06.269 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:06.269 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:06.269 issued rwts: total=16826,8762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:06.269 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:06.269 00:32:06.269 Run status group 0 (all jobs): 00:32:06.269 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=263MiB (276MB), run=2006-2006msec 00:32:06.269 WRITE: bw=75.1MiB/s (78.8MB/s), 75.1MiB/s-75.1MiB/s (78.8MB/s-78.8MB/s), io=137MiB (144MB), run=1822-1822msec 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:06.269 16:37:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:32:06.269 16:37:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:32:09.555 Nvme0n1 00:32:09.555 16:37:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=73b72232-8f64-450a-a9b2-d56f366373c9 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 73b72232-8f64-450a-a9b2-d56f366373c9 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=73b72232-8f64-450a-a9b2-d56f366373c9 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:12.935 { 00:32:12.935 "uuid": "73b72232-8f64-450a-a9b2-d56f366373c9", 00:32:12.935 "name": "lvs_0", 00:32:12.935 "base_bdev": "Nvme0n1", 00:32:12.935 "total_data_clusters": 930, 00:32:12.935 "free_clusters": 930, 00:32:12.935 "block_size": 512, 00:32:12.935 "cluster_size": 1073741824 00:32:12.935 } 00:32:12.935 ]' 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="73b72232-8f64-450a-a9b2-d56f366373c9") .free_clusters' 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="73b72232-8f64-450a-a9b2-d56f366373c9") .cluster_size' 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:12.935 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:12.936 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:12.936 952320 00:32:12.936 16:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:13.214 3bebd6bc-6645-462a-9d6b-84c888917ddc 00:32:13.214 16:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:13.490 16:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 
00:32:13.767 16:38:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:14.053 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:14.054 16:38:04 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:14.054 16:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:14.339 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:14.339 fio-3.35 00:32:14.339 Starting 1 thread 00:32:16.869 00:32:16.869 test: (groupid=0, jobs=1): err= 0: pid=355993: Tue Nov 19 16:38:06 2024 00:32:16.869 read: IOPS=5990, BW=23.4MiB/s (24.5MB/s)(47.0MiB/2007msec) 00:32:16.869 slat (usec): min=2, max=162, avg= 2.54, stdev= 2.31 00:32:16.869 clat (usec): min=974, max=171247, avg=11720.02, stdev=11652.68 00:32:16.869 lat (usec): 
min=978, max=171290, avg=11722.57, stdev=11653.06 00:32:16.869 clat percentiles (msec): 00:32:16.869 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:32:16.869 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:32:16.869 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:32:16.869 | 99.00th=[ 14], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:32:16.869 | 99.99th=[ 171] 00:32:16.869 bw ( KiB/s): min=17048, max=26368, per=99.62%, avg=23870.00, stdev=4551.96, samples=4 00:32:16.869 iops : min= 4262, max= 6592, avg=5967.50, stdev=1137.99, samples=4 00:32:16.869 write: IOPS=5970, BW=23.3MiB/s (24.5MB/s)(46.8MiB/2007msec); 0 zone resets 00:32:16.869 slat (usec): min=2, max=137, avg= 2.67, stdev= 1.69 00:32:16.869 clat (usec): min=358, max=169275, avg=9575.02, stdev=10938.33 00:32:16.869 lat (usec): min=362, max=169283, avg=9577.69, stdev=10938.76 00:32:16.869 clat percentiles (msec): 00:32:16.869 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:32:16.869 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:32:16.869 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:32:16.869 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:32:16.869 | 99.99th=[ 169] 00:32:16.869 bw ( KiB/s): min=18088, max=25928, per=100.00%, avg=23882.00, stdev=3865.62, samples=4 00:32:16.869 iops : min= 4522, max= 6482, avg=5970.50, stdev=966.40, samples=4 00:32:16.869 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:32:16.869 lat (msec) : 2=0.02%, 4=0.12%, 10=55.44%, 20=43.85%, 250=0.53% 00:32:16.869 cpu : usr=61.91%, sys=36.74%, ctx=89, majf=0, minf=41 00:32:16.869 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:16.869 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:16.869 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:16.869 issued rwts: total=12022,11982,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:16.869 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:32:16.869 00:32:16.869 Run status group 0 (all jobs): 00:32:16.869 READ: bw=23.4MiB/s (24.5MB/s), 23.4MiB/s-23.4MiB/s (24.5MB/s-24.5MB/s), io=47.0MiB (49.2MB), run=2007-2007msec 00:32:16.869 WRITE: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.8MiB (49.1MB), run=2007-2007msec 00:32:16.870 16:38:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:16.870 16:38:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=d462ff2e-83bb-4bca-b277-cea3c612922b 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb d462ff2e-83bb-4bca-b277-cea3c612922b 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=d462ff2e-83bb-4bca-b277-cea3c612922b 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:18.245 { 00:32:18.245 "uuid": "73b72232-8f64-450a-a9b2-d56f366373c9", 00:32:18.245 "name": "lvs_0", 00:32:18.245 "base_bdev": "Nvme0n1", 00:32:18.245 "total_data_clusters": 930, 00:32:18.245 "free_clusters": 0, 00:32:18.245 
"block_size": 512, 00:32:18.245 "cluster_size": 1073741824 00:32:18.245 }, 00:32:18.245 { 00:32:18.245 "uuid": "d462ff2e-83bb-4bca-b277-cea3c612922b", 00:32:18.245 "name": "lvs_n_0", 00:32:18.245 "base_bdev": "3bebd6bc-6645-462a-9d6b-84c888917ddc", 00:32:18.245 "total_data_clusters": 237847, 00:32:18.245 "free_clusters": 237847, 00:32:18.245 "block_size": 512, 00:32:18.245 "cluster_size": 4194304 00:32:18.245 } 00:32:18.245 ]' 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="d462ff2e-83bb-4bca-b277-cea3c612922b") .free_clusters' 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="d462ff2e-83bb-4bca-b277-cea3c612922b") .cluster_size' 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:18.245 951388 00:32:18.245 16:38:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:19.178 6220bcf1-8eef-4fb9-a9ce-b5c9a92defe9 00:32:19.178 16:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:19.178 16:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:19.436 16:38:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:19.693 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.693 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.693 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:19.693 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:19.952 16:38:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:19.952 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:19.952 fio-3.35 00:32:19.952 Starting 1 thread 00:32:22.480 00:32:22.480 test: (groupid=0, jobs=1): err= 0: pid=356731: Tue Nov 19 16:38:12 2024 00:32:22.480 read: IOPS=5730, BW=22.4MiB/s (23.5MB/s)(45.0MiB/2010msec) 00:32:22.480 slat (nsec): min=1905, max=158384, avg=2523.92, stdev=2427.50 00:32:22.480 clat (usec): min=4724, max=21453, avg=12214.20, stdev=1109.06 00:32:22.480 lat (usec): min=4732, max=21455, avg=12216.72, stdev=1108.94 00:32:22.480 clat percentiles 
(usec): 00:32:22.480 | 1.00th=[ 9765], 5.00th=[10421], 10.00th=[10945], 20.00th=[11338], 00:32:22.480 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12256], 60.00th=[12518], 00:32:22.480 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13566], 95.00th=[13960], 00:32:22.480 | 99.00th=[14615], 99.50th=[14877], 99.90th=[19268], 99.95th=[20841], 00:32:22.480 | 99.99th=[21365] 00:32:22.480 bw ( KiB/s): min=21624, max=23416, per=99.99%, avg=22920.00, stdev=866.39, samples=4 00:32:22.480 iops : min= 5406, max= 5852, avg=5730.00, stdev=216.57, samples=4 00:32:22.480 write: IOPS=5719, BW=22.3MiB/s (23.4MB/s)(44.9MiB/2010msec); 0 zone resets 00:32:22.480 slat (usec): min=2, max=135, avg= 2.64, stdev= 1.93 00:32:22.480 clat (usec): min=2321, max=20770, avg=10014.11, stdev=978.13 00:32:22.480 lat (usec): min=2328, max=20772, avg=10016.75, stdev=978.10 00:32:22.480 clat percentiles (usec): 00:32:22.480 | 1.00th=[ 7898], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9241], 00:32:22.480 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10159], 00:32:22.480 | 70.00th=[10421], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:32:22.480 | 99.00th=[12125], 99.50th=[12649], 99.90th=[17957], 99.95th=[19268], 00:32:22.480 | 99.99th=[20579] 00:32:22.480 bw ( KiB/s): min=22656, max=23168, per=99.90%, avg=22856.00, stdev=223.52, samples=4 00:32:22.480 iops : min= 5664, max= 5792, avg=5714.00, stdev=55.88, samples=4 00:32:22.480 lat (msec) : 4=0.05%, 10=26.00%, 20=73.91%, 50=0.04% 00:32:22.481 cpu : usr=60.38%, sys=38.23%, ctx=120, majf=0, minf=41 00:32:22.481 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:32:22.481 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:22.481 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:22.481 issued rwts: total=11518,11497,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:22.481 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:22.481 00:32:22.481 Run 
status group 0 (all jobs): 00:32:22.481 READ: bw=22.4MiB/s (23.5MB/s), 22.4MiB/s-22.4MiB/s (23.5MB/s-23.5MB/s), io=45.0MiB (47.2MB), run=2010-2010msec 00:32:22.481 WRITE: bw=22.3MiB/s (23.4MB/s), 22.3MiB/s-22.3MiB/s (23.4MB/s-23.4MB/s), io=44.9MiB (47.1MB), run=2010-2010msec 00:32:22.481 16:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:22.739 16:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:22.739 16:38:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:26.928 16:38:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:26.928 16:38:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:30.218 16:38:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:30.218 16:38:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:32.125 rmmod nvme_tcp 00:32:32.125 rmmod nvme_fabrics 00:32:32.125 rmmod nvme_keyring 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 353887 ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 353887 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 353887 ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 353887 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353887 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 353887' 00:32:32.125 killing process with pid 353887 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 353887 
00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 353887 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:32.125 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:32.384 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:32.384 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:32.384 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:32.384 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:32.384 16:38:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:34.291 00:32:34.291 real 0m37.904s 00:32:34.291 user 2m25.800s 00:32:34.291 sys 0m7.040s 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.291 ************************************ 00:32:34.291 END TEST nvmf_fio_host 00:32:34.291 ************************************ 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # 
run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:34.291 ************************************ 00:32:34.291 START TEST nvmf_failover 00:32:34.291 ************************************ 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:34.291 * Looking for test storage... 00:32:34.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:32:34.291 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 
00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:34.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.550 --rc genhtml_branch_coverage=1 00:32:34.550 --rc genhtml_function_coverage=1 00:32:34.550 --rc genhtml_legend=1 00:32:34.550 --rc geninfo_all_blocks=1 00:32:34.550 --rc geninfo_unexecuted_blocks=1 00:32:34.550 00:32:34.550 ' 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:34.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.550 --rc genhtml_branch_coverage=1 00:32:34.550 --rc genhtml_function_coverage=1 00:32:34.550 --rc genhtml_legend=1 00:32:34.550 --rc geninfo_all_blocks=1 00:32:34.550 --rc geninfo_unexecuted_blocks=1 00:32:34.550 00:32:34.550 ' 00:32:34.550 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:34.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.550 --rc genhtml_branch_coverage=1 00:32:34.550 --rc genhtml_function_coverage=1 00:32:34.550 --rc genhtml_legend=1 00:32:34.550 --rc geninfo_all_blocks=1 00:32:34.550 --rc geninfo_unexecuted_blocks=1 00:32:34.551 00:32:34.551 ' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:34.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:34.551 --rc genhtml_branch_coverage=1 00:32:34.551 --rc genhtml_function_coverage=1 00:32:34.551 --rc genhtml_legend=1 00:32:34.551 --rc geninfo_all_blocks=1 00:32:34.551 --rc geninfo_unexecuted_blocks=1 00:32:34.551 00:32:34.551 ' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.551 16:38:24 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:34.551 16:38:24 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:34.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:34.551 16:38:24 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:34.551 16:38:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:36.454 16:38:26 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:36.454 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.454 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:36.455 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:36.455 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:32:36.455 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:36.455 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:36.713 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:36.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:36.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:32:36.714 00:32:36.714 --- 10.0.0.2 ping statistics --- 00:32:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.714 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:36.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:36.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:32:36.714 00:32:36.714 --- 10.0.0.1 ping statistics --- 00:32:36.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:36.714 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=359988 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 359988 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 359988 ']' 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:36.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:36.714 16:38:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:36.714 [2024-11-19 16:38:26.937815] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:32:36.714 [2024-11-19 16:38:26.937901] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.714 [2024-11-19 16:38:27.015165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:36.972 [2024-11-19 16:38:27.063793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.972 [2024-11-19 16:38:27.063841] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.972 [2024-11-19 16:38:27.063870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.972 [2024-11-19 16:38:27.063881] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:36.972 [2024-11-19 16:38:27.063890] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.972 [2024-11-19 16:38:27.065396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.972 [2024-11-19 16:38:27.065477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.972 [2024-11-19 16:38:27.065482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:36.972 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:37.230 [2024-11-19 16:38:27.442510] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.230 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:37.488 Malloc0 00:32:37.488 16:38:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:37.744 16:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:38.002 16:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:38.260 [2024-11-19 16:38:28.546288] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:38.260 16:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:38.518 [2024-11-19 16:38:28.815119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:38.518 16:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:38.776 [2024-11-19 16:38:29.075915] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=360275 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 360275 /var/tmp/bdevperf.sock 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 360275 ']' 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:38.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:38.776 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:39.359 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:39.359 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:39.359 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.618 NVMe0n1 00:32:39.618 16:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.878 00:32:39.878 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=360407 00:32:39.878 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:39.878 16:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:40.814 16:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:41.072 [2024-11-19 16:38:31.299886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc4060 is same with the state(6) to be set 00:32:41.073 16:38:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:44.366 16:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:44.366 00:32:44.366 16:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:44.932 16:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:48.222 16:38:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:48.222 [2024-11-19 16:38:38.284963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:48.222 16:38:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:49.159 16:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:49.419 [2024-11-19 16:38:39.586419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc64d0 is same with the state(6) to be set 00:32:49.420 16:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 360407 00:32:56.005 { 00:32:56.005 "results": [ 00:32:56.005 { 00:32:56.005 "job": "NVMe0n1", 00:32:56.005 "core_mask": "0x1", 00:32:56.005 "workload": "verify", 00:32:56.005 "status": "finished", 00:32:56.005 "verify_range": { 00:32:56.005 "start": 0, 00:32:56.005 "length": 16384 00:32:56.005 }, 00:32:56.005 "queue_depth": 128, 00:32:56.005 "io_size": 4096, 00:32:56.005 "runtime": 15.011884, 00:32:56.005 "iops": 8386.95529488504, 00:32:56.005 "mibps": 32.761544120644686, 00:32:56.005 "io_failed": 12373, 00:32:56.005 "io_timeout": 0, 00:32:56.005 "avg_latency_us": 13868.954502885912, 00:32:56.005 "min_latency_us": 546.1333333333333, 00:32:56.005 "max_latency_us": 18350.08 00:32:56.005 } 00:32:56.005 ], 00:32:56.005 "core_count": 1 00:32:56.005 } 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 360275 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 360275 ']' 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 360275 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 360275 00:32:56.005 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- 
# process_name=reactor_0 00:32:56.006 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.006 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 360275' 00:32:56.006 killing process with pid 360275 00:32:56.006 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 360275 00:32:56.006 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 360275 00:32:56.006 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:56.006 [2024-11-19 16:38:29.141041] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:32:56.006 [2024-11-19 16:38:29.141158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360275 ] 00:32:56.006 [2024-11-19 16:38:29.211445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.006 [2024-11-19 16:38:29.259737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:56.006 Running I/O for 15 seconds... 
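The bdevperf results JSON above reports `iops`, `io_size`, and `mibps` together; the throughput figure is simply IOPS times the per-I/O size (the `-o 4096` flag) converted to MiB. A quick sanity check using the values from that JSON:

```python
# Cross-check the bdevperf summary: MiB/s is derived from IOPS and I/O size.
iops = 8386.95529488504   # "iops" from the results JSON above
io_size = 4096            # "io_size" in bytes (bdevperf -o 4096)

mibps = iops * io_size / (1024 * 1024)  # bytes/s -> MiB/s
print(mibps)              # ~32.76, matching "mibps" in the JSON
```

Likewise `runtime` (~15.01 s) lines up with the `-t 15` run length, with the small overshoot coming from shutdown overhead.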
00:32:56.006 8368.00 IOPS, 32.69 MiB/s [2024-11-19T15:38:46.345Z] [2024-11-19 16:38:31.300741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:77224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:77232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:77240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:77248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:77256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 
[2024-11-19 16:38:31.300955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:77272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.300984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.300999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:77280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.301012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:77288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.006 [2024-11-19 16:38:31.301040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:77416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:77424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301157] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:77432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:77440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:77448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:77456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:77464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:77472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:77480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:77488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:77496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:77504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:77512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:77520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 
[2024-11-19 16:38:31.301502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:77528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:77536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:77544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:77552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:77560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301658] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:77568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:77576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:77584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.006 [2024-11-19 16:38:31.301725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.006 [2024-11-19 16:38:31.301740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:77592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:77600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:77608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:77616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:77624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:77632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:77640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:77648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:77656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.301978] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.301993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:77664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:77672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:77680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:77688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:77696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 
lba:77704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:77712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:77720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:77728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:77736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:77744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 
16:38:31.302339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:77752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:77760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:77768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:77776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:77784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:77792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302509] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:77800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:77808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:77816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:77824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:77832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:77840 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:77848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:77856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:77864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:77872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:77880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.007 [2024-11-19 16:38:31.302824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.007 [2024-11-19 16:38:31.302838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:77888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.302851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.302866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:77896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.302880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.302894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:77904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.302907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.302921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:77912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.302950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.302966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:77920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.302980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.302994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:77928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.008 [2024-11-19 16:38:31.303008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303042] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77936 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303099] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77944 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303160] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77952 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303202] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303214] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303225] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77960 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303256] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303267] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77968 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303304] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303315] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77976 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303351] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303362] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77984 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303408] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:77992 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303460] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78000 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303506] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78008 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303542] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303553] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78016 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303605] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78024 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303647] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303658] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78032 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303695] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78040 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303752] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78048 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303799] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78056 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303839] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78064 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303886] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 
[2024-11-19 16:38:31.303897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78072 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303933] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303943] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.303954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78080 len:8 PRP1 0x0 PRP2 0x0 00:32:56.008 [2024-11-19 16:38:31.303966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.008 [2024-11-19 16:38:31.303984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.008 [2024-11-19 16:38:31.303995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.008 [2024-11-19 16:38:31.304006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78088 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:78096 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304088] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78104 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304135] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304233] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304244] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78128 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304280] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78136 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304326] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304336] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78144 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304377] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304388] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 
16:38:31.304400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78152 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304435] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78160 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304471] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304481] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78168 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304517] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78176 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304566] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78184 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304615] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304625] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78192 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304671] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78200 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304707] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304718] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78208 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304771] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78216 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304807] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304817] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78224 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304853] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78232 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 
[2024-11-19 16:38:31.304887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304900] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304910] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77296 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304951] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.304962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.009 [2024-11-19 16:38:31.304973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77304 len:8 PRP1 0x0 PRP2 0x0 00:32:56.009 [2024-11-19 16:38:31.304985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.009 [2024-11-19 16:38:31.304998] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.009 [2024-11-19 16:38:31.305008] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77312 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305044] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305055] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77320 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305100] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77328 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305154] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305165] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77336 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305201] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305211] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305222] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77344 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305258] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:78240 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305304] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77352 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305344] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305355] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77360 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 
[2024-11-19 16:38:31.305390] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77368 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305447] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77376 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305482] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77384 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305535] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305546] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 
[2024-11-19 16:38:31.305556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77392 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305581] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77400 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.010 [2024-11-19 16:38:31.305638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.010 [2024-11-19 16:38:31.305649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:77408 len:8 PRP1 0x0 PRP2 0x0 00:32:56.010 [2024-11-19 16:38:31.305661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305723] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:56.010 [2024-11-19 16:38:31.305765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.010 [2024-11-19 16:38:31.305783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:56.010 [2024-11-19 16:38:31.305798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.010 [2024-11-19 16:38:31.305811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.010 [2024-11-19 16:38:31.305837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305850] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.010 [2024-11-19 16:38:31.305863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.010 [2024-11-19 16:38:31.305875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:56.010 [2024-11-19 16:38:31.309129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:56.010 [2024-11-19 16:38:31.309167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25283b0 (9): Bad file descriptor 00:32:56.010 [2024-11-19 16:38:31.460747] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:32:56.010 7813.50 IOPS, 30.52 MiB/s [2024-11-19T15:38:46.349Z] 8126.33 IOPS, 31.74 MiB/s [2024-11-19T15:38:46.349Z] 8228.25 IOPS, 32.14 MiB/s [2024-11-19T15:38:46.349Z] [2024-11-19 16:38:34.954146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954370] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:105744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 
[2024-11-19 16:38:34.954731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.954930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-11-19 16:38:34.954958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.954973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.011 [2024-11-19 16:38:34.954986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:56.011 [2024-11-19 16:38:34.955262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.011 [2024-11-19 16:38:34.955290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.011 [2024-11-19 16:38:34.955305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:56.012 [2024-11-19 16:38:34.955779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:106088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:106096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.012 [2024-11-19 16:38:34.955916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.955943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.955975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.955990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.956003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.956018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.956031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.956061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.956084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.012 [2024-11-19 16:38:34.956101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.012 [2024-11-19 16:38:34.956116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:56.012 [2024-11-19 16:38:34.956131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.012 [2024-11-19 16:38:34.956145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... ~63 similar command/completion pairs elided: READ sqid:1 lba:105160-105656 len:8 (plus one WRITE sqid:1 cid:16 lba:106104), each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
00:32:56.014 [2024-11-19 16:38:34.958101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2560450 is same with the state(6) to be set
00:32:56.014 [2024-11-19 16:38:34.958118] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:56.014 [2024-11-19 16:38:34.958129] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:56.014 [2024-11-19 16:38:34.958140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105664 len:8 PRP1 0x0 PRP2 0x0
00:32:56.014 [2024-11-19 16:38:34.958153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:56.014 [2024-11-19 16:38:34.958216] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 4 admin ASYNC EVENT REQUEST (0c) commands (qid:0 cid:3, cid:2, cid:1, cid:0) elided, each completed as ABORTED - SQ DELETION (00/08) qid:0 ...]
00:32:56.014 [2024-11-19 16:38:34.958375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:32:56.014 [2024-11-19 16:38:34.961672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:32:56.014 [2024-11-19 16:38:34.961710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25283b0 (9): Bad file descriptor
00:32:56.014 [2024-11-19 16:38:34.985564] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:32:56.014 8206.40 IOPS, 32.06 MiB/s [2024-11-19T15:38:46.353Z] 8258.33 IOPS, 32.26 MiB/s [2024-11-19T15:38:46.353Z] 8301.14 IOPS, 32.43 MiB/s [2024-11-19T15:38:46.353Z] 8353.75 IOPS, 32.63 MiB/s [2024-11-19T15:38:46.353Z] 8387.33 IOPS, 32.76 MiB/s [2024-11-19T15:38:46.353Z]
00:32:56.014 [2024-11-19 16:38:39.587274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:56.014 [2024-11-19 16:38:39.587315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... similar command/completion pairs elided: READ sqid:1 lba:35960-36208 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1 ...]
READ sqid:1 cid:86 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:36256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:32:56.015 [2024-11-19 16:38:39.588501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588654] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.015 [2024-11-19 16:38:39.588668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.015 [2024-11-19 16:38:39.588682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.016 [2024-11-19 16:38:39.588709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.016 [2024-11-19 16:38:39.588741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:36344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:36352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:36360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:36368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:36376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:36384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:36392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.588949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:36400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 
16:38:39.588977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.588990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:36440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589159] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:36456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:56.016 [2024-11-19 16:38:39.589216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:36488 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:36496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:36576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:36608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 [2024-11-19 16:38:39.589791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.016 
[2024-11-19 16:38:39.589824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.016 [2024-11-19 16:38:39.589842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.589871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.589899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.589926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:36656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.589953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.589981] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:36672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.589994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:36688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:36736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:36744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590345] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:36768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 
lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:36824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:36832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 
16:38:39.590676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:36880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:36896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590836] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:36904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.590982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.017 [2024-11-19 16:38:39.590996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:36944 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:56.017 [2024-11-19 16:38:39.591010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:36952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.018 [2024-11-19 16:38:39.591037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:36960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:56.018 [2024-11-19 16:38:39.591065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591117] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:56.018 [2024-11-19 16:38:39.591134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:56.018 [2024-11-19 16:38:39.591146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:36968 len:8 PRP1 0x0 PRP2 0x0 00:32:56.018 [2024-11-19 16:38:39.591159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591222] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:56.018 [2024-11-19 16:38:39.591259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.018 [2024-11-19 16:38:39.591277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:32:56.018 [2024-11-19 16:38:39.591291] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.018 [2024-11-19 16:38:39.591305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591319] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.018 [2024-11-19 16:38:39.591331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:56.018 [2024-11-19 16:38:39.591358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:56.018 [2024-11-19 16:38:39.591371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:56.018 [2024-11-19 16:38:39.594592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:56.018 [2024-11-19 16:38:39.594632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x25283b0 (9): Bad file descriptor 00:32:56.018 [2024-11-19 16:38:39.713051] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:32:56.018 8292.40 IOPS, 32.39 MiB/s [2024-11-19T15:38:46.357Z] 8316.00 IOPS, 32.48 MiB/s [2024-11-19T15:38:46.357Z] 8332.67 IOPS, 32.55 MiB/s [2024-11-19T15:38:46.357Z] 8350.00 IOPS, 32.62 MiB/s [2024-11-19T15:38:46.357Z] 8370.43 IOPS, 32.70 MiB/s [2024-11-19T15:38:46.357Z] 8385.27 IOPS, 32.75 MiB/s 00:32:56.018 Latency(us) 00:32:56.018 [2024-11-19T15:38:46.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.018 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:56.018 Verification LBA range: start 0x0 length 0x4000 00:32:56.018 NVMe0n1 : 15.01 8386.96 32.76 824.21 0.00 13868.95 546.13 18350.08 00:32:56.018 [2024-11-19T15:38:46.357Z] =================================================================================================================== 00:32:56.018 [2024-11-19T15:38:46.357Z] Total : 8386.96 32.76 824.21 0.00 13868.95 546.13 18350.08 00:32:56.018 Received shutdown signal, test time was about 15.000000 seconds 00:32:56.018 00:32:56.018 Latency(us) 00:32:56.018 [2024-11-19T15:38:46.357Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:56.018 [2024-11-19T15:38:46.357Z] =================================================================================================================== 00:32:56.018 [2024-11-19T15:38:46.357Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=362244 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 
-o 4096 -w verify -t 1 -f 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 362244 /var/tmp/bdevperf.sock 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 362244 ']' 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:56.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:56.018 [2024-11-19 16:38:45.946973] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:56.018 16:38:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:56.018 [2024-11-19 16:38:46.211746] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:56.018 16:38:46 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:56.277 NVMe0n1 00:32:56.536 16:38:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:56.794 00:32:56.794 16:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:57.359 00:32:57.359 16:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:57.359 16:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:57.616 16:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:57.875 16:38:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:33:01.159 16:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:01.159 16:38:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:33:01.159 16:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=362907 00:33:01.159 16:38:51 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:33:01.159 16:38:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 362907 00:33:02.098 { 00:33:02.098 "results": [ 00:33:02.098 { 00:33:02.098 "job": "NVMe0n1", 00:33:02.098 "core_mask": "0x1", 00:33:02.098 "workload": "verify", 00:33:02.098 "status": "finished", 00:33:02.098 "verify_range": { 00:33:02.098 "start": 0, 00:33:02.098 "length": 16384 00:33:02.098 }, 00:33:02.098 "queue_depth": 128, 00:33:02.098 "io_size": 4096, 00:33:02.098 "runtime": 1.004576, 00:33:02.098 "iops": 8182.556620902749, 00:33:02.098 "mibps": 31.963111800401364, 00:33:02.098 "io_failed": 0, 00:33:02.098 "io_timeout": 0, 00:33:02.098 "avg_latency_us": 15578.513128953773, 00:33:02.098 "min_latency_us": 2184.5333333333333, 00:33:02.098 "max_latency_us": 13204.29037037037 00:33:02.098 } 00:33:02.098 ], 00:33:02.098 "core_count": 1 00:33:02.098 } 00:33:02.098 16:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:02.098 [2024-11-19 16:38:45.470442] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:33:02.098 [2024-11-19 16:38:45.470530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362244 ] 00:33:02.098 [2024-11-19 16:38:45.542999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.098 [2024-11-19 16:38:45.586861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.098 [2024-11-19 16:38:47.953235] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:02.098 [2024-11-19 16:38:47.953327] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.098 [2024-11-19 16:38:47.953350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.098 [2024-11-19 16:38:47.953366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.098 [2024-11-19 16:38:47.953380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.098 [2024-11-19 16:38:47.953395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.098 [2024-11-19 16:38:47.953417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.098 [2024-11-19 16:38:47.953431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:02.098 [2024-11-19 16:38:47.953445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:02.098 [2024-11-19 16:38:47.953460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:02.098 [2024-11-19 16:38:47.953505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:02.098 [2024-11-19 16:38:47.953542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1df83b0 (9): Bad file descriptor 00:33:02.098 [2024-11-19 16:38:47.964115] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:02.098 Running I/O for 1 seconds... 00:33:02.098 8092.00 IOPS, 31.61 MiB/s 00:33:02.098 Latency(us) 00:33:02.098 [2024-11-19T15:38:52.437Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:02.098 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:02.098 Verification LBA range: start 0x0 length 0x4000 00:33:02.098 NVMe0n1 : 1.00 8182.56 31.96 0.00 0.00 15578.51 2184.53 13204.29 00:33:02.098 [2024-11-19T15:38:52.437Z] =================================================================================================================== 00:33:02.098 [2024-11-19T15:38:52.437Z] Total : 8182.56 31.96 0.00 0.00 15578.51 2184.53 13204.29 00:33:02.098 16:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:02.098 16:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:02.357 16:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:02.924 16:38:52 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:02.924 16:38:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:02.924 16:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:03.492 16:38:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 362244 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 362244 ']' 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 362244 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362244 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362244' 00:33:06.783 killing process 
with pid 362244 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 362244 00:33:06.783 16:38:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 362244 00:33:06.783 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:06.783 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:07.041 rmmod nvme_tcp 00:33:07.041 rmmod nvme_fabrics 00:33:07.041 rmmod nvme_keyring 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 359988 ']' 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@518 -- # killprocess 359988 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 359988 ']' 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 359988 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:07.041 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 359988 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 359988' 00:33:07.299 killing process with pid 359988 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 359988 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 359988 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.299 16:38:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:09.839 00:33:09.839 real 0m35.104s 00:33:09.839 user 2m4.175s 00:33:09.839 sys 0m5.815s 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:09.839 ************************************ 00:33:09.839 END TEST nvmf_failover 00:33:09.839 ************************************ 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.839 ************************************ 00:33:09.839 START TEST nvmf_host_discovery 00:33:09.839 ************************************ 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:09.839 * Looking for test storage... 
00:33:09.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:09.839 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:09.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.839 --rc genhtml_branch_coverage=1 00:33:09.839 --rc genhtml_function_coverage=1 00:33:09.839 --rc 
genhtml_legend=1 00:33:09.839 --rc geninfo_all_blocks=1 00:33:09.839 --rc geninfo_unexecuted_blocks=1 00:33:09.839 00:33:09.840 ' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:09.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.840 --rc genhtml_branch_coverage=1 00:33:09.840 --rc genhtml_function_coverage=1 00:33:09.840 --rc genhtml_legend=1 00:33:09.840 --rc geninfo_all_blocks=1 00:33:09.840 --rc geninfo_unexecuted_blocks=1 00:33:09.840 00:33:09.840 ' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:09.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.840 --rc genhtml_branch_coverage=1 00:33:09.840 --rc genhtml_function_coverage=1 00:33:09.840 --rc genhtml_legend=1 00:33:09.840 --rc geninfo_all_blocks=1 00:33:09.840 --rc geninfo_unexecuted_blocks=1 00:33:09.840 00:33:09.840 ' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:09.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:09.840 --rc genhtml_branch_coverage=1 00:33:09.840 --rc genhtml_function_coverage=1 00:33:09.840 --rc genhtml_legend=1 00:33:09.840 --rc geninfo_all_blocks=1 00:33:09.840 --rc geninfo_unexecuted_blocks=1 00:33:09.840 00:33:09.840 ' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:09.840 16:38:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:09.840 16:38:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:09.840 16:38:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:09.840 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:09.840 16:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:11.744 
16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:11.744 16:39:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:11.744 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:11.744 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:11.744 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:11.744 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:11.745 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:11.745 16:39:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:12.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:12.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:33:12.006 00:33:12.006 --- 10.0.0.2 ping statistics --- 00:33:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.006 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:12.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:12.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:33:12.006 00:33:12.006 --- 10.0.0.1 ping statistics --- 00:33:12.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:12.006 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:12.006 
16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=365748 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 365748 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 365748 ']' 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:12.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.006 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.006 [2024-11-19 16:39:02.281865] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:33:12.006 [2024-11-19 16:39:02.281969] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:12.266 [2024-11-19 16:39:02.356798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.266 [2024-11-19 16:39:02.405036] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:12.266 [2024-11-19 16:39:02.405121] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:12.266 [2024-11-19 16:39:02.405173] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:12.266 [2024-11-19 16:39:02.405187] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:12.266 [2024-11-19 16:39:02.405198] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:12.266 [2024-11-19 16:39:02.405865] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 [2024-11-19 16:39:02.555695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 [2024-11-19 16:39:02.563903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:12.266 16:39:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 null0 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 null1 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=365769 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 365769 /tmp/host.sock 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 365769 ']' 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:12.266 16:39:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:12.266 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:12.266 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.526 [2024-11-19 16:39:02.642202] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:33:12.527 [2024-11-19 16:39:02.642280] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid365769 ] 00:33:12.527 [2024-11-19 16:39:02.708849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:12.527 [2024-11-19 16:39:02.754961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:12.785 16:39:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:12.785 16:39:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set 
+x 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:12.785 16:39:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:33:12.785 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:33:12.786 16:39:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:12.786 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.045 [2024-11-19 16:39:03.141436] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.045 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 
-- # sort 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:33:13.046 16:39:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:13.615 [2024-11-19 16:39:03.922843] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:13.615 [2024-11-19 16:39:03.922879] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:13.615 [2024-11-19 16:39:03.922900] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:13.874 [2024-11-19 16:39:04.010190] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:13.874 [2024-11-19 16:39:04.193330] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:13.874 [2024-11-19 16:39:04.194307] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x146e1b0:1 started. 00:33:13.874 [2024-11-19 16:39:04.196156] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:13.874 [2024-11-19 16:39:04.196177] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:13.874 [2024-11-19 16:39:04.201544] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x146e1b0 was disconnected and freed. delete nvme_qpair. 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.134 16:39:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:14.134 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:33:14.135 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:14.135 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.135 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.135 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:14.403 
16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.403 [2024-11-19 16:39:04.496147] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x146e900:1 started. 00:33:14.403 [2024-11-19 16:39:04.502088] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x146e900 was disconnected and freed. delete nvme_qpair. 
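The `waitforcondition` calls that dominate this trace (common/autotest_common.sh@918-@924) all follow the same polling shape visible in the xtrace: stash the condition string, bound the retries, `eval` the condition each pass, and sleep between attempts. A minimal standalone re-implementation reconstructed from those xtrace lines (the body is an approximation of SPDK's helper, not its exact source; the demo `check` function is hypothetical):

```shell
# Polling helper modeled on the waitforcondition pattern in the trace:
# retry an eval'd condition up to $max times, returning 0 on first success.
waitforcondition() {
    local cond=$1
    local max=10
    while ((max--)); do
        # eval lets callers pass compound conditions such as
        # 'get_notification_count && ((notification_count == expected_count))'
        if eval "$cond"; then
            return 0
        fi
        sleep 0.1   # the helper in the trace sleeps 1s between retries
    done
    return 1
}

# Tiny demo: a condition that only becomes true on the third poll.
tries=0
check() { tries=$((tries + 1)); ((tries >= 3)); }
waitforcondition check && echo "condition met after $tries tries"
```

This is why the trace shows the same `get_subsystem_names` pipeline repeated at 16:39:03 and again at 16:39:04: the first evaluation sees `'' == nvme0` and falls through to `sleep 1`; discovery attaches the controller in between, and the second pass matches.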
00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 [2024-11-19 16:39:04.577975] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:14.403 [2024-11-19 16:39:04.578729] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:14.403 [2024-11-19 16:39:04.578772] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.403 16:39:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:33:14.403 [2024-11-19 16:39:04.664510] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:33:14.403 16:39:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:33:14.662 [2024-11-19 16:39:04.767346] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:33:14.662 [2024-11-19 16:39:04.767411] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:14.662 [2024-11-19 16:39:04.767427] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:14.662 [2024-11-19 16:39:04.767435] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:15.602 16:39:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.602 [2024-11-19 16:39:05.802634] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:15.602 [2024-11-19 16:39:05.802678] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.602 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 
00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:15.603 [2024-11-19 16:39:05.808006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.603 [2024-11-19 16:39:05.808046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.603 [2024-11-19 16:39:05.808096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.603 [2024-11-19 16:39:05.808123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.603 [2024-11-19 16:39:05.808137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.603 [2024-11-19 16:39:05.808151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.603 [2024-11-19 16:39:05.808165] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:15.603 [2024-11-19 16:39:05.808179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:15.603 [2024-11-19 16:39:05.808192] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:15.603 16:39:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:15.603 [2024-11-19 16:39:05.817999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.603 [2024-11-19 16:39:05.828037] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.603 [2024-11-19 16:39:05.828060] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:15.603 [2024-11-19 16:39:05.828078] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.828087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.603 [2024-11-19 16:39:05.828137] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:15.603 [2024-11-19 16:39:05.828346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.603 [2024-11-19 16:39:05.828376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.603 [2024-11-19 16:39:05.828393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.603 [2024-11-19 16:39:05.828417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.603 [2024-11-19 16:39:05.828438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.603 [2024-11-19 16:39:05.828452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.603 [2024-11-19 16:39:05.828468] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:15.603 [2024-11-19 16:39:05.828481] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.603 [2024-11-19 16:39:05.828492] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.603 [2024-11-19 16:39:05.828501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:15.603 [2024-11-19 16:39:05.838169] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.603 [2024-11-19 16:39:05.838190] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:15.603 [2024-11-19 16:39:05.838199] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.838206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.603 [2024-11-19 16:39:05.838245] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.838375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.603 [2024-11-19 16:39:05.838403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.603 [2024-11-19 16:39:05.838419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.603 [2024-11-19 16:39:05.838441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.603 [2024-11-19 16:39:05.838470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.603 [2024-11-19 16:39:05.838484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.603 [2024-11-19 16:39:05.838497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:15.603 [2024-11-19 16:39:05.838509] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.603 [2024-11-19 16:39:05.838518] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.603 [2024-11-19 16:39:05.838525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:15.603 [2024-11-19 16:39:05.848280] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.603 [2024-11-19 16:39:05.848304] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:15.603 [2024-11-19 16:39:05.848313] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.848321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.603 [2024-11-19 16:39:05.848360] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.848499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.603 [2024-11-19 16:39:05.848530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.603 [2024-11-19 16:39:05.848546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.603 [2024-11-19 16:39:05.848567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.603 [2024-11-19 16:39:05.848588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.603 [2024-11-19 16:39:05.848601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.603 [2024-11-19 16:39:05.848614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:15.603 [2024-11-19 16:39:05.848626] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.603 [2024-11-19 16:39:05.848639] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.603 [2024-11-19 16:39:05.848647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:15.603 16:39:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.603 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:15.603 [2024-11-19 16:39:05.858395] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.603 [2024-11-19 16:39:05.858418] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:15.603 [2024-11-19 16:39:05.858437] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.858444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.603 [2024-11-19 16:39:05.858485] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:15.603 [2024-11-19 16:39:05.858637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.603 [2024-11-19 16:39:05.858665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.603 [2024-11-19 16:39:05.858681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.603 [2024-11-19 16:39:05.858703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.603 [2024-11-19 16:39:05.858724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.603 [2024-11-19 16:39:05.858737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.603 [2024-11-19 16:39:05.858751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in 
failed state. 00:33:15.603 [2024-11-19 16:39:05.858763] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.604 [2024-11-19 16:39:05.858772] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.604 [2024-11-19 16:39:05.858779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:15.604 [2024-11-19 16:39:05.868520] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.604 [2024-11-19 16:39:05.868542] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:15.604 [2024-11-19 16:39:05.868552] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.604 [2024-11-19 16:39:05.868560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.604 [2024-11-19 16:39:05.868600] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:15.604 [2024-11-19 16:39:05.868703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.604 [2024-11-19 16:39:05.868731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.604 [2024-11-19 16:39:05.868747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.604 [2024-11-19 16:39:05.868769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.604 [2024-11-19 16:39:05.868789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.604 [2024-11-19 16:39:05.868802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.604 [2024-11-19 16:39:05.868821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:15.604 [2024-11-19 16:39:05.868834] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.604 [2024-11-19 16:39:05.868843] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.604 [2024-11-19 16:39:05.868850] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:15.604 [2024-11-19 16:39:05.878635] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.604 [2024-11-19 16:39:05.878656] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:15.604 [2024-11-19 16:39:05.878665] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.604 [2024-11-19 16:39:05.878673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.604 [2024-11-19 16:39:05.878712] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:15.604 [2024-11-19 16:39:05.878856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.604 [2024-11-19 16:39:05.878883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.604 [2024-11-19 16:39:05.878898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.604 [2024-11-19 16:39:05.878920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.604 [2024-11-19 16:39:05.878939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.604 [2024-11-19 16:39:05.878953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.604 [2024-11-19 16:39:05.878966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:15.604 [2024-11-19 16:39:05.878977] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.604 [2024-11-19 16:39:05.878986] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.604 [2024-11-19 16:39:05.878994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.604 [2024-11-19 16:39:05.888745] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:15.604 [2024-11-19 16:39:05.888767] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:15.604 [2024-11-19 16:39:05.888776] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:15.604 [2024-11-19 16:39:05.888783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:15.604 [2024-11-19 16:39:05.888822] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:15.604 [2024-11-19 16:39:05.888955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:15.604 [2024-11-19 16:39:05.888982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14401f0 with addr=10.0.0.2, port=4420 00:33:15.604 [2024-11-19 16:39:05.888998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14401f0 is same with the state(6) to be set 00:33:15.604 [2024-11-19 16:39:05.889020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14401f0 (9): Bad file descriptor 00:33:15.604 [2024-11-19 16:39:05.889040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:15.604 [2024-11-19 16:39:05.889059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:15.604 [2024-11-19 16:39:05.889082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:15.604 [2024-11-19 16:39:05.889096] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:15.604 [2024-11-19 16:39:05.889106] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:15.604 [2024-11-19 16:39:05.889113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:15.604 [2024-11-19 16:39:05.889164] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:15.604 [2024-11-19 16:39:05.889200] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:15.604 16:39:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 
00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.604 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.865 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:15.866 16:39:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:15.866 16:39:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.866 16:39:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # (( max-- )) 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:15.866 16:39:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:16.805 [2024-11-19 16:39:07.121081] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:16.805 [2024-11-19 
16:39:07.121136] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:16.805 [2024-11-19 16:39:07.121159] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:17.063 [2024-11-19 16:39:07.207436] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:17.322 [2024-11-19 16:39:07.428059] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:17.322 [2024-11-19 16:39:07.428915] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x143ba00:1 started. 00:33:17.322 [2024-11-19 16:39:07.431015] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:17.322 [2024-11-19 16:39:07.431054] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.322 [2024-11-19 16:39:07.440441] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x143ba00 was disconnected and freed. delete nvme_qpair. 00:33:17.322 request: 00:33:17.322 { 00:33:17.322 "name": "nvme", 00:33:17.322 "trtype": "tcp", 00:33:17.322 "traddr": "10.0.0.2", 00:33:17.322 "adrfam": "ipv4", 00:33:17.322 "trsvcid": "8009", 00:33:17.322 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:17.322 "wait_for_attach": true, 00:33:17.322 "method": "bdev_nvme_start_discovery", 00:33:17.322 "req_id": 1 00:33:17.322 } 00:33:17.322 Got JSON-RPC error response 00:33:17.322 response: 00:33:17.322 { 00:33:17.322 "code": -17, 00:33:17.322 "message": "File exists" 00:33:17.322 } 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.322 
16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:17.322 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 
00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.323 request: 00:33:17.323 { 00:33:17.323 "name": "nvme_second", 00:33:17.323 "trtype": "tcp", 00:33:17.323 "traddr": "10.0.0.2", 00:33:17.323 "adrfam": "ipv4", 00:33:17.323 "trsvcid": "8009", 00:33:17.323 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:17.323 "wait_for_attach": true, 00:33:17.323 "method": "bdev_nvme_start_discovery", 00:33:17.323 "req_id": 1 00:33:17.323 } 00:33:17.323 Got JSON-RPC error response 00:33:17.323 response: 00:33:17.323 { 00:33:17.323 
"code": -17, 00:33:17.323 "message": "File exists" 00:33:17.323 } 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.323 16:39:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:17.323 16:39:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.323 16:39:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.701 [2024-11-19 16:39:08.618454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:18.701 [2024-11-19 16:39:08.618524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143de00 with addr=10.0.0.2, port=8010 00:33:18.701 [2024-11-19 16:39:08.618553] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:18.701 [2024-11-19 16:39:08.618567] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:18.701 [2024-11-19 16:39:08.618579] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:19.635 [2024-11-19 16:39:09.620823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:19.635 [2024-11-19 16:39:09.620876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x143de00 with addr=10.0.0.2, port=8010 00:33:19.635 [2024-11-19 16:39:09.620897] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:19.635 [2024-11-19 16:39:09.620910] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:19.635 [2024-11-19 16:39:09.620921] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:20.576 [2024-11-19 16:39:10.623090] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:20.576 request: 00:33:20.576 { 00:33:20.576 "name": "nvme_second", 00:33:20.576 "trtype": "tcp", 00:33:20.576 "traddr": "10.0.0.2", 00:33:20.576 "adrfam": "ipv4", 00:33:20.576 "trsvcid": "8010", 00:33:20.576 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:20.576 "wait_for_attach": false, 00:33:20.576 "attach_timeout_ms": 3000, 
00:33:20.576 "method": "bdev_nvme_start_discovery", 00:33:20.576 "req_id": 1 00:33:20.576 } 00:33:20.576 Got JSON-RPC error response 00:33:20.576 response: 00:33:20.576 { 00:33:20.576 "code": -110, 00:33:20.576 "message": "Connection timed out" 00:33:20.576 } 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@161 -- # kill 365769 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:20.576 rmmod nvme_tcp 00:33:20.576 rmmod nvme_fabrics 00:33:20.576 rmmod nvme_keyring 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 365748 ']' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 365748 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 365748 ']' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 365748 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 365748 00:33:20.576 16:39:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:20.576 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 365748' 00:33:20.576 killing process with pid 365748 00:33:20.577 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 365748 00:33:20.577 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 365748 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:20.837 16:39:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:22.748 16:39:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:22.748 00:33:22.748 real 0m13.329s 00:33:22.748 user 0m18.889s 00:33:22.748 sys 0m2.862s 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.748 ************************************ 00:33:22.748 END TEST nvmf_host_discovery 00:33:22.748 ************************************ 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:22.748 16:39:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:23.007 ************************************ 00:33:23.007 START TEST nvmf_host_multipath_status 00:33:23.007 ************************************ 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:23.007 * Looking for test storage... 
00:33:23.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:23.007 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:23.008 16:39:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:23.008 16:39:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.008 --rc genhtml_branch_coverage=1 00:33:23.008 --rc genhtml_function_coverage=1 00:33:23.008 --rc genhtml_legend=1 00:33:23.008 --rc geninfo_all_blocks=1 00:33:23.008 --rc geninfo_unexecuted_blocks=1 00:33:23.008 00:33:23.008 ' 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.008 --rc genhtml_branch_coverage=1 00:33:23.008 --rc genhtml_function_coverage=1 00:33:23.008 --rc genhtml_legend=1 00:33:23.008 --rc geninfo_all_blocks=1 00:33:23.008 --rc geninfo_unexecuted_blocks=1 00:33:23.008 00:33:23.008 ' 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.008 --rc genhtml_branch_coverage=1 00:33:23.008 --rc genhtml_function_coverage=1 00:33:23.008 --rc genhtml_legend=1 00:33:23.008 --rc geninfo_all_blocks=1 00:33:23.008 --rc geninfo_unexecuted_blocks=1 00:33:23.008 00:33:23.008 ' 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:23.008 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:23.008 --rc genhtml_branch_coverage=1 00:33:23.008 --rc genhtml_function_coverage=1 00:33:23.008 --rc genhtml_legend=1 00:33:23.008 --rc geninfo_all_blocks=1 00:33:23.008 --rc geninfo_unexecuted_blocks=1 00:33:23.008 00:33:23.008 ' 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:23.008 
16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.008 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:23.009 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:23.009 16:39:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:23.009 16:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:25.544 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:25.544 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:25.544 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:25.544 16:39:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:25.544 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:25.544 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:25.545 16:39:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:25.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:25.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:33:25.545 00:33:25.545 --- 10.0.0.2 ping statistics --- 00:33:25.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.545 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:25.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:25.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:33:25.545 00:33:25.545 --- 10.0.0.1 ping statistics --- 00:33:25.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:25.545 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=369315 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 369315 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369315 ']' 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:25.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.545 [2024-11-19 16:39:15.465532] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:33:25.545 [2024-11-19 16:39:15.465609] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:25.545 [2024-11-19 16:39:15.537619] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:25.545 [2024-11-19 16:39:15.581951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:25.545 [2024-11-19 16:39:15.582005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:25.545 [2024-11-19 16:39:15.582033] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:25.545 [2024-11-19 16:39:15.582044] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:25.545 [2024-11-19 16:39:15.582053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:25.545 [2024-11-19 16:39:15.583498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.545 [2024-11-19 16:39:15.583504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=369315 00:33:25.545 16:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:25.804 [2024-11-19 16:39:15.988126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:25.804 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:26.062 Malloc0 00:33:26.062 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:26.321 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.579 16:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.837 [2024-11-19 16:39:17.118250] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:26.837 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:27.404 [2024-11-19 16:39:17.443065] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=369596 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 369596 /var/tmp/bdevperf.sock 00:33:27.405 16:39:17 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 369596 ']' 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:27.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:27.405 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:27.664 16:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:28.232 Nvme0n1 00:33:28.232 16:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:28.800 Nvme0n1 00:33:28.800 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:28.800 16:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:30.702 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:30.702 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:31.267 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:31.525 16:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:32.457 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:32.457 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:32.457 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.458 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:32.715 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:32.715 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:32.715 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.715 16:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:32.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:32.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:32.973 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:33.231 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.231 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:33.231 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.231 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:33.489 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.489 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:33.489 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.489 16:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:33.748 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.748 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:33.748 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.748 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.005 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.005 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:34.005 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:34.262 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:34.520 16:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:35.901 16:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:35.901 16:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:35.901 16:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.901 16:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:35.901 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:35.901 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:35.901 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:35.901 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.159 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.159 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.159 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.159 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:36.417 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.417 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:36.417 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.417 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:36.675 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.675 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:36.675 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.675 16:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:36.933 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.933 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:36.933 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.933 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:37.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.501 16:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:37.761 16:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.137 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.394 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.394 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.394 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.394 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.652 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.652 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.652 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.652 16:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.910 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.910 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.910 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.910 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:40.168 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.168 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:40.168 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.168 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:40.736 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.736 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:40.736 16:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:40.736 16:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:40.995 16:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.375 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:42.633 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:42.633 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:42.633 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.633 16:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:42.891 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.891 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:42.891 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.891 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:43.149 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.149 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:43.149 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.149 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:43.406 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:43.406 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:43.406 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:43.406 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:43.664 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:43.664 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:43.664 16:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:44.234 16:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:44.234 16:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:45.611 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:45.611 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:45.612 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.612 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:45.612 16:39:35 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.612 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:45.612 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.612 16:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:45.870 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:45.870 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:45.870 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:45.870 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:46.128 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.128 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:46.128 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.128 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:46.386 
16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:46.386 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:46.386 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.386 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:46.645 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.645 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:46.645 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:46.645 16:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:46.903 16:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:46.903 16:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:46.903 16:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:47.162 16:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:47.421 16:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:48.360 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:48.360 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:48.620 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.620 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:48.879 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:48.879 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:48.879 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:48.879 16:39:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:49.136 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.136 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:49.136 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.136 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:49.394 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.394 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:49.394 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.394 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:49.652 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:49.652 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:49.652 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.652 16:39:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:49.910 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:49.910 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:49.910 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:49.910 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:50.169 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:50.169 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:50.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:50.428 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:50.686 16:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:50.946 16:39:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:51.881 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:51.881 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:51.881 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:51.881 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:52.139 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.139 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:52.139 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.139 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:52.399 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.399 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:52.399 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:52.399 16:39:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:52.967 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:52.967 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:52.967 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:52.967 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:53.225 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.225 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:53.225 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.225 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:53.483 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.483 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:53.483 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:53.483 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:53.741 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:53.741 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:53.741 16:39:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:53.999 16:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:54.258 16:39:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:55.195 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:55.195 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:55.195 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.195 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:55.453 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:55.453 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:55.453 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.453 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:55.710 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.710 16:39:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:55.710 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.710 16:39:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:55.968 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:55.968 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:55.968 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:55.968 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:56.226 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.226 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:56.226 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.226 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:56.484 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.484 
16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:56.484 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:56.484 16:39:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:56.742 16:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:56.742 16:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:56.742 16:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:57.000 16:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:57.567 16:39:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:58.500 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:58.500 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:58.500 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:58.500 16:39:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.758 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:58.758 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:58.758 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:58.758 16:39:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:59.015 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.015 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:59.015 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.015 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:59.273 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.273 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:59.273 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:59.273 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:59.531 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.531 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:59.531 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.531 16:39:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:59.789 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:59.789 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:59.789 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:59.789 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:00.047 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:00.047 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:34:00.047 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:34:00.303 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:00.562 16:39:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:01.940 16:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:01.940 16:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:01.940 16:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.940 16:39:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.940 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.940 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:01.940 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.940 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:02.199 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:02.199 
16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:02.199 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.199 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:02.457 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.457 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:02.457 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.457 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.715 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.715 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.715 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.715 16:39:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.973 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
00:34:02.974 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:34:02.974 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.974 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 369596 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369596 ']' 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369596 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:03.232 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369596 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369596' 00:34:03.493 killing process with pid 369596 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369596 
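The status checks above all follow one pattern: call the `bdev_nvme_get_io_paths` RPC over the bdevperf socket, then select a single flag (`current`, `connected`, or `accessible`) for one listener port with jq. The selection logic can be sketched in Python against a sample document shaped like that RPC's output; the field names come from the jq filters in this log, but the JSON values below are illustrative, not captured from this run.

```python
import json

# Sample document shaped like the bdev_nvme_get_io_paths RPC output queried
# throughout this log; the values here are illustrative, not captured output.
sample = json.loads('''
{"poll_groups": [{"io_paths": [
  {"transport": {"trsvcid": "4420"}, "current": true,  "connected": true, "accessible": true},
  {"transport": {"trsvcid": "4421"}, "current": false, "connected": true, "accessible": true}
]}]}
''')

def port_status(doc, port, field):
    # Mirrors the jq filter used in the log:
    #   .poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD
    for group in doc["poll_groups"]:
        for path in group["io_paths"]:
            if path["transport"]["trsvcid"] == port:
                return path[field]
    return None  # no path listens on that port

print(port_status(sample, "4420", "current"))     # True
print(port_status(sample, "4421", "accessible"))  # True
```

The test then compares the extracted flag against the expected value, exactly as the `[[ true == \t\r\u\e ]]` lines do after each jq invocation.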
00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369596 00:34:03.493 { 00:34:03.493 "results": [ 00:34:03.493 { 00:34:03.493 "job": "Nvme0n1", 00:34:03.493 "core_mask": "0x4", 00:34:03.493 "workload": "verify", 00:34:03.493 "status": "terminated", 00:34:03.493 "verify_range": { 00:34:03.493 "start": 0, 00:34:03.493 "length": 16384 00:34:03.493 }, 00:34:03.493 "queue_depth": 128, 00:34:03.493 "io_size": 4096, 00:34:03.493 "runtime": 34.384983, 00:34:03.493 "iops": 8159.783007599567, 00:34:03.493 "mibps": 31.87415237343581, 00:34:03.493 "io_failed": 0, 00:34:03.493 "io_timeout": 0, 00:34:03.493 "avg_latency_us": 15661.030948185848, 00:34:03.493 "min_latency_us": 442.9748148148148, 00:34:03.493 "max_latency_us": 4026531.84 00:34:03.493 } 00:34:03.493 ], 00:34:03.493 "core_count": 1 00:34:03.493 } 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 369596 00:34:03.493 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:03.493 [2024-11-19 16:39:17.509187] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:34:03.493 [2024-11-19 16:39:17.509268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid369596 ] 00:34:03.493 [2024-11-19 16:39:17.577824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:03.493 [2024-11-19 16:39:17.625108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:03.493 Running I/O for 90 seconds... 
00:34:03.493 8779.00 IOPS, 34.29 MiB/s [2024-11-19T15:39:53.832Z] 8733.50 IOPS, 34.12 MiB/s [2024-11-19T15:39:53.832Z] 8806.67 IOPS, 34.40 MiB/s [2024-11-19T15:39:53.832Z] 8830.00 IOPS, 34.49 MiB/s [2024-11-19T15:39:53.832Z] 8847.80 IOPS, 34.56 MiB/s [2024-11-19T15:39:53.832Z] 8793.67 IOPS, 34.35 MiB/s [2024-11-19T15:39:53.832Z] 8770.14 IOPS, 34.26 MiB/s [2024-11-19T15:39:53.832Z] 8756.75 IOPS, 34.21 MiB/s [2024-11-19T15:39:53.832Z] 8754.56 IOPS, 34.20 MiB/s [2024-11-19T15:39:53.832Z] 8780.70 IOPS, 34.30 MiB/s [2024-11-19T15:39:53.832Z] 8799.82 IOPS, 34.37 MiB/s [2024-11-19T15:39:53.832Z] 8793.08 IOPS, 34.35 MiB/s [2024-11-19T15:39:53.832Z] 8802.54 IOPS, 34.38 MiB/s [2024-11-19T15:39:53.832Z] 8800.43 IOPS, 34.38 MiB/s [2024-11-19T15:39:53.832Z] 8789.53 IOPS, 34.33 MiB/s [2024-11-19T15:39:53.832Z] [2024-11-19 16:39:34.252389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:12416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.493 [2024-11-19 16:39:34.252448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:03.493 [2024-11-19 16:39:34.252525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.493 [2024-11-19 16:39:34.252545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:03.493 [2024-11-19 16:39:34.252569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.493 [2024-11-19 16:39:34.252586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.252608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 
nsid:1 lba:12440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.252624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.252646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.252662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.252684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.252700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.252722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:12464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.252738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.252760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.252776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.253761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:74 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:12488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.253823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:12496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.253863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.253901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.253941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.253964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.253980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:12304 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:12312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:12320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:12328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:12344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:03.494 [2024-11-19 16:39:34.254237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:12512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:12544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12552 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:12560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:12568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:12576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:34:03.494 [2024-11-19 16:39:34.254702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:12600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:12608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:12616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:12624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 [2024-11-19 16:39:34.254883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:03.494 [2024-11-19 16:39:34.254907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494 
[2024-11-19 16:39:34.254923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:03.494
[2024-11-19 16:39:34.254946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.494
[2024-11-19 16:39:34.254963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:03.494
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE (lba:12656-13184, len:8, SGL DATA BLOCK OFFSET) and READ (lba:12352-12408, len:8, SGL TRANSPORT DATA BLOCK) commands on qid:1, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
8288.00 IOPS, 32.38 MiB/s [2024-11-19T15:39:53.835Z]
7800.47 IOPS, 30.47 MiB/s [2024-11-19T15:39:53.835Z]
7367.11 IOPS, 28.78 MiB/s [2024-11-19T15:39:53.835Z]
6979.37 IOPS, 27.26 MiB/s [2024-11-19T15:39:53.835Z]
7033.10 IOPS, 27.47 MiB/s [2024-11-19T15:39:53.835Z]
7107.67 IOPS, 27.76 MiB/s [2024-11-19T15:39:53.835Z]
7193.95 IOPS, 28.10 MiB/s [2024-11-19T15:39:53.835Z]
7368.87 IOPS, 28.78 MiB/s [2024-11-19T15:39:53.835Z]
7520.38 IOPS, 29.38 MiB/s [2024-11-19T15:39:53.835Z]
7660.04 IOPS, 29.92 MiB/s [2024-11-19T15:39:53.835Z]
7699.23 IOPS, 30.08 MiB/s [2024-11-19T15:39:53.835Z]
7732.33 IOPS, 30.20 MiB/s [2024-11-19T15:39:53.835Z]
7759.25 IOPS, 30.31 MiB/s [2024-11-19T15:39:53.835Z]
7832.45 IOPS, 30.60 MiB/s [2024-11-19T15:39:53.835Z]
7944.03 IOPS, 31.03 MiB/s [2024-11-19T15:39:53.835Z]
8043.84 IOPS, 31.42 MiB/s [2024-11-19T15:39:53.835Z]
[2024-11-19 16:39:50.828027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.496
[2024-11-19 16:39:50.828330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:03.496
[... repeated NOTICE pairs elided: WRITE (lba:83528-83728, len:8) and READ (lba:83152-83504, len:8) commands on qid:1, again completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
[2024-11-19
16:39:50.830972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:83744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.497 [2024-11-19 16:39:50.830988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:03.497 [2024-11-19 16:39:50.831010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:83760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.497 [2024-11-19 16:39:50.831026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:03.497 [2024-11-19 16:39:50.831049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.497 [2024-11-19 16:39:50.831065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:03.497 [2024-11-19 16:39:50.831097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:83792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.497 [2024-11-19 16:39:50.831140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.831165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:83808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.831187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.831211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:83824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.831227] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:83840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.833814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:83856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.833860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:83872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.833898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:83888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.833936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:83904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:03.498 [2024-11-19 16:39:50.833974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:03.498 [2024-11-19 16:39:50.833997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:83920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:34:03.498 [2024-11-19 16:39:50.834013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:34:03.498 8112.66 IOPS, 31.69 MiB/s [2024-11-19T15:39:53.837Z] 8132.79 IOPS, 31.77 MiB/s [2024-11-19T15:39:53.837Z] 8153.71 IOPS, 31.85 MiB/s [2024-11-19T15:39:53.837Z] Received shutdown signal, test time was about 34.385767 seconds
00:34:03.498
00:34:03.498 Latency(us)
00:34:03.498 [2024-11-19T15:39:53.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:03.498 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:34:03.498 Verification LBA range: start 0x0 length 0x4000
00:34:03.498 Nvme0n1 : 34.38 8159.78 31.87 0.00 0.00 15661.03 442.97 4026531.84
00:34:03.498 [2024-11-19T15:39:53.837Z] ===================================================================================================================
00:34:03.498 [2024-11-19T15:39:53.837Z] Total : 8159.78 31.87 0.00 0.00 15661.03 442.97 4026531.84
00:34:03.498 16:39:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup
00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status
-- nvmf/common.sh@121 -- # sync 00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:03.757 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:03.757 rmmod nvme_tcp 00:34:03.757 rmmod nvme_fabrics 00:34:04.017 rmmod nvme_keyring 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 369315 ']' 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 369315 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 369315 ']' 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 369315 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 369315 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 
= sudo ']' 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 369315' 00:34:04.017 killing process with pid 369315 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 369315 00:34:04.017 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 369315 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:04.278 16:39:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 
00:34:06.188
00:34:06.188 real 0m43.352s
00:34:06.188 user 2m12.417s
00:34:06.188 sys 0m10.579s
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:34:06.188 ************************************
00:34:06.188 END TEST nvmf_host_multipath_status
00:34:06.188 ************************************
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:34:06.188 ************************************
00:34:06.188 START TEST nvmf_discovery_remove_ifc
00:34:06.188 ************************************
00:34:06.188 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:34:06.448 * Looking for test storage...
00:34:06.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:34:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.448 --rc genhtml_branch_coverage=1 00:34:06.448 --rc genhtml_function_coverage=1 00:34:06.448 --rc genhtml_legend=1 00:34:06.448 --rc geninfo_all_blocks=1 00:34:06.448 --rc geninfo_unexecuted_blocks=1 00:34:06.448 00:34:06.448 ' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.448 --rc genhtml_branch_coverage=1 00:34:06.448 --rc genhtml_function_coverage=1 00:34:06.448 --rc genhtml_legend=1 00:34:06.448 --rc geninfo_all_blocks=1 00:34:06.448 --rc geninfo_unexecuted_blocks=1 00:34:06.448 00:34:06.448 ' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.448 --rc genhtml_branch_coverage=1 00:34:06.448 --rc genhtml_function_coverage=1 00:34:06.448 --rc genhtml_legend=1 00:34:06.448 --rc geninfo_all_blocks=1 00:34:06.448 --rc geninfo_unexecuted_blocks=1 00:34:06.448 00:34:06.448 ' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:06.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:06.448 --rc genhtml_branch_coverage=1 00:34:06.448 --rc genhtml_function_coverage=1 00:34:06.448 --rc genhtml_legend=1 00:34:06.448 --rc geninfo_all_blocks=1 00:34:06.448 --rc geninfo_unexecuted_blocks=1 00:34:06.448 00:34:06.448 ' 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:06.448 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:06.449 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:06.449 
16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:06.449 16:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.983 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:08.984 16:39:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:08.984 16:39:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:08.984 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.984 16:39:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:08.984 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:08.984 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:08.984 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:08.984 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:08.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:08.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:34:08.985 00:34:08.985 --- 10.0.0.2 ping statistics --- 00:34:08.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.985 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:08.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:08.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:34:08.985 00:34:08.985 --- 10.0.0.1 ping statistics --- 00:34:08.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:08.985 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=376051 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 376051 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376051 ']' 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:08.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:08.985 16:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.985 [2024-11-19 16:39:58.998005] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:34:08.985 [2024-11-19 16:39:58.998124] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:08.985 [2024-11-19 16:39:59.068692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:08.985 [2024-11-19 16:39:59.113912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:08.985 [2024-11-19 16:39:59.113998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:08.985 [2024-11-19 16:39:59.114013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:08.985 [2024-11-19 16:39:59.114025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:08.985 [2024-11-19 16:39:59.114035] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:08.985 [2024-11-19 16:39:59.114650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:08.985 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.985 [2024-11-19 16:39:59.263442] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:08.985 [2024-11-19 16:39:59.271647] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:08.985 null0 00:34:08.985 [2024-11-19 16:39:59.303552] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=376076 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 376076 /tmp/host.sock 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 376076 ']' 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:09.244 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:09.244 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.244 [2024-11-19 16:39:59.372487] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:34:09.244 [2024-11-19 16:39:59.372555] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid376076 ] 00:34:09.244 [2024-11-19 16:39:59.439529] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:09.244 [2024-11-19 16:39:59.485554] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:09.503 16:39:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:09.503 16:39:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.442 [2024-11-19 16:40:00.762775] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:10.442 [2024-11-19 16:40:00.762809] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:10.442 [2024-11-19 16:40:00.762836] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:10.701 [2024-11-19 16:40:00.890266] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:10.961 [2024-11-19 16:40:01.112707] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:10.961 [2024-11-19 16:40:01.113865] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xdebc00:1 started. 
00:34:10.961 [2024-11-19 16:40:01.115633] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:10.961 [2024-11-19 16:40:01.115693] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:10.961 [2024-11-19 16:40:01.115730] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:10.961 [2024-11-19 16:40:01.115754] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:10.961 [2024-11-19 16:40:01.115789] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.961 [2024-11-19 16:40:01.119932] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xdebc00 was disconnected and freed. delete nvme_qpair. 
00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:10.961 16:40:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:12.340 16:40:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.275 16:40:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.214 16:40:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:15.152 16:40:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.532 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.532 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.532 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.532 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.533 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.533 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.533 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.533 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.533 
16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.533 16:40:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.533 [2024-11-19 16:40:06.556819] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:16.533 [2024-11-19 16:40:06.556904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.533 [2024-11-19 16:40:06.556925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-11-19 16:40:06.556941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.533 [2024-11-19 16:40:06.556954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-11-19 16:40:06.556968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.533 [2024-11-19 16:40:06.556981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-11-19 16:40:06.556994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:16.533 [2024-11-19 16:40:06.557006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-11-19 16:40:06.557019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:16.533 [2024-11-19 16:40:06.557032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:16.533 [2024-11-19 16:40:06.557044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8400 is same with the state(6) to be set 00:34:16.533 [2024-11-19 16:40:06.566838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8400 (9): Bad file descriptor 00:34:16.533 [2024-11-19 16:40:06.576880] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:16.533 [2024-11-19 16:40:06.576901] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:16.533 [2024-11-19 16:40:06.576911] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:16.533 [2024-11-19 16:40:06.576919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:16.533 [2024-11-19 16:40:06.576969] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.467 [2024-11-19 16:40:07.615115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:17.467 [2024-11-19 16:40:07.615196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdc8400 with addr=10.0.0.2, port=4420 00:34:17.467 [2024-11-19 16:40:07.615219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdc8400 is same with the state(6) to be set 00:34:17.467 [2024-11-19 16:40:07.615262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8400 (9): Bad file descriptor 00:34:17.467 [2024-11-19 16:40:07.615673] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:17.467 [2024-11-19 16:40:07.615714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:17.467 [2024-11-19 16:40:07.615731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:17.467 [2024-11-19 16:40:07.615745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:17.467 [2024-11-19 16:40:07.615758] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:17.467 [2024-11-19 16:40:07.615769] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:17.467 [2024-11-19 16:40:07.615777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:17.467 [2024-11-19 16:40:07.615790] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:17.467 [2024-11-19 16:40:07.615798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.467 16:40:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:18.405 [2024-11-19 16:40:08.618287] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:18.405 [2024-11-19 16:40:08.618313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:18.405 [2024-11-19 16:40:08.618330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:18.405 [2024-11-19 16:40:08.618353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:18.405 [2024-11-19 16:40:08.618366] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:18.405 [2024-11-19 16:40:08.618377] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:18.405 [2024-11-19 16:40:08.618385] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:18.405 [2024-11-19 16:40:08.618392] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:18.405 [2024-11-19 16:40:08.618435] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:18.405 [2024-11-19 16:40:08.618484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.405 [2024-11-19 16:40:08.618505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.405 [2024-11-19 16:40:08.618522] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.405 [2024-11-19 16:40:08.618535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.405 [2024-11-19 16:40:08.618548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:18.405 [2024-11-19 16:40:08.618560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.405 [2024-11-19 16:40:08.618573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.405 [2024-11-19 16:40:08.618585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.405 [2024-11-19 16:40:08.618598] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:18.405 [2024-11-19 16:40:08.618610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:18.405 [2024-11-19 16:40:08.618623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:18.405 [2024-11-19 16:40:08.618800] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdb7b40 (9): Bad file descriptor 00:34:18.405 [2024-11-19 16:40:08.619818] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:18.405 [2024-11-19 16:40:08.619838] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:18.405 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:18.406 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:18.664 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:18.664 16:40:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.596 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:19.597 16:40:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.534 [2024-11-19 16:40:10.676714] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:20.534 [2024-11-19 16:40:10.676749] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:20.534 [2024-11-19 16:40:10.676771] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.534 [2024-11-19 16:40:10.805224] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:20.534 16:40:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.534 [2024-11-19 16:40:10.865874] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:20.535 [2024-11-19 16:40:10.866658] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xdc8160:1 started. 
00:34:20.535 [2024-11-19 16:40:10.868087] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:20.535 [2024-11-19 16:40:10.868142] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:20.535 [2024-11-19 16:40:10.868172] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:20.535 [2024-11-19 16:40:10.868196] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:20.535 [2024-11-19 16:40:10.868209] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:20.795 [2024-11-19 16:40:10.875845] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xdc8160 was disconnected and freed. delete nvme_qpair. 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:21.731 16:40:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 376076 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376076 ']' 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376076 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:21.731 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376076 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376076' 00:34:21.732 killing process with pid 376076 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376076 00:34:21.732 16:40:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376076 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:21.991 16:40:12 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:21.991 rmmod nvme_tcp 00:34:21.991 rmmod nvme_fabrics 00:34:21.991 rmmod nvme_keyring 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 376051 ']' 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 376051 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 376051 ']' 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 376051 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 376051 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 376051' 00:34:21.991 killing process 
with pid 376051 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 376051 00:34:21.991 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 376051 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:22.250 16:40:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:24.285 00:34:24.285 real 0m17.958s 00:34:24.285 user 0m25.939s 00:34:24.285 sys 0m3.162s 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.285 ************************************ 00:34:24.285 END TEST nvmf_discovery_remove_ifc 00:34:24.285 ************************************ 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:24.285 ************************************ 00:34:24.285 START TEST nvmf_identify_kernel_target 00:34:24.285 ************************************ 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:24.285 * Looking for test storage... 
00:34:24.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:24.285 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:24.591 16:40:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:24.591 16:40:14 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.591 --rc genhtml_branch_coverage=1 00:34:24.591 --rc genhtml_function_coverage=1 00:34:24.591 --rc genhtml_legend=1 00:34:24.591 --rc geninfo_all_blocks=1 00:34:24.591 --rc geninfo_unexecuted_blocks=1 00:34:24.591 00:34:24.591 ' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.591 --rc genhtml_branch_coverage=1 00:34:24.591 --rc genhtml_function_coverage=1 00:34:24.591 --rc genhtml_legend=1 00:34:24.591 --rc geninfo_all_blocks=1 00:34:24.591 --rc geninfo_unexecuted_blocks=1 00:34:24.591 00:34:24.591 ' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.591 --rc genhtml_branch_coverage=1 00:34:24.591 --rc genhtml_function_coverage=1 00:34:24.591 --rc genhtml_legend=1 00:34:24.591 --rc geninfo_all_blocks=1 00:34:24.591 --rc geninfo_unexecuted_blocks=1 00:34:24.591 00:34:24.591 ' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:24.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:24.591 --rc genhtml_branch_coverage=1 00:34:24.591 --rc genhtml_function_coverage=1 00:34:24.591 --rc genhtml_legend=1 00:34:24.591 --rc geninfo_all_blocks=1 00:34:24.591 --rc geninfo_unexecuted_blocks=1 00:34:24.591 00:34:24.591 ' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.591 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:24.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:24.592 16:40:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.570 16:40:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:26.570 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.570 16:40:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:26.570 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.570 16:40:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:26.570 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:26.570 Found net devices under 0000:0a:00.1: cvl_0_1 
00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.570 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:26.571 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:26.571 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:34:26.571 00:34:26.571 --- 10.0.0.2 ping statistics --- 00:34:26.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.571 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:26.571 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.571 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:34:26.571 00:34:26.571 --- 10.0.0.1 ping statistics --- 00:34:26.571 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.571 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:26.571 
16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:26.571 16:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:27.948 Waiting for block devices as requested 00:34:27.948 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:27.948 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:27.948 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:28.207 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:28.207 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.207 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:28.207 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:28.468 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.468 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.468 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:28.468 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:28.468 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:28.727 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:28.727 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:28.727 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 
00:34:28.986 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:28.986 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:28.986 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:29.246 No valid GPT data, bailing 00:34:29.246 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:29.246 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:29.246 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:29.247 00:34:29.247 Discovery Log Number of Records 2, Generation counter 2 00:34:29.247 =====Discovery Log Entry 0====== 00:34:29.247 trtype: tcp 00:34:29.247 adrfam: ipv4 00:34:29.247 subtype: current discovery subsystem 
00:34:29.247 treq: not specified, sq flow control disable supported 00:34:29.247 portid: 1 00:34:29.247 trsvcid: 4420 00:34:29.247 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:29.247 traddr: 10.0.0.1 00:34:29.247 eflags: none 00:34:29.247 sectype: none 00:34:29.247 =====Discovery Log Entry 1====== 00:34:29.247 trtype: tcp 00:34:29.247 adrfam: ipv4 00:34:29.247 subtype: nvme subsystem 00:34:29.247 treq: not specified, sq flow control disable supported 00:34:29.247 portid: 1 00:34:29.247 trsvcid: 4420 00:34:29.247 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:29.247 traddr: 10.0.0.1 00:34:29.247 eflags: none 00:34:29.247 sectype: none 00:34:29.247 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:29.247 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:29.507 ===================================================== 00:34:29.507 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:29.507 ===================================================== 00:34:29.507 Controller Capabilities/Features 00:34:29.507 ================================ 00:34:29.507 Vendor ID: 0000 00:34:29.507 Subsystem Vendor ID: 0000 00:34:29.507 Serial Number: a134556ce53693b0a1ae 00:34:29.507 Model Number: Linux 00:34:29.507 Firmware Version: 6.8.9-20 00:34:29.507 Recommended Arb Burst: 0 00:34:29.507 IEEE OUI Identifier: 00 00 00 00:34:29.507 Multi-path I/O 00:34:29.507 May have multiple subsystem ports: No 00:34:29.507 May have multiple controllers: No 00:34:29.507 Associated with SR-IOV VF: No 00:34:29.507 Max Data Transfer Size: Unlimited 00:34:29.507 Max Number of Namespaces: 0 00:34:29.507 Max Number of I/O Queues: 1024 00:34:29.507 NVMe Specification Version (VS): 1.3 00:34:29.507 NVMe Specification Version (Identify): 1.3 00:34:29.507 Maximum Queue Entries: 1024 
00:34:29.507 Contiguous Queues Required: No 00:34:29.507 Arbitration Mechanisms Supported 00:34:29.507 Weighted Round Robin: Not Supported 00:34:29.507 Vendor Specific: Not Supported 00:34:29.507 Reset Timeout: 7500 ms 00:34:29.507 Doorbell Stride: 4 bytes 00:34:29.507 NVM Subsystem Reset: Not Supported 00:34:29.507 Command Sets Supported 00:34:29.507 NVM Command Set: Supported 00:34:29.507 Boot Partition: Not Supported 00:34:29.507 Memory Page Size Minimum: 4096 bytes 00:34:29.507 Memory Page Size Maximum: 4096 bytes 00:34:29.507 Persistent Memory Region: Not Supported 00:34:29.507 Optional Asynchronous Events Supported 00:34:29.507 Namespace Attribute Notices: Not Supported 00:34:29.507 Firmware Activation Notices: Not Supported 00:34:29.507 ANA Change Notices: Not Supported 00:34:29.507 PLE Aggregate Log Change Notices: Not Supported 00:34:29.507 LBA Status Info Alert Notices: Not Supported 00:34:29.508 EGE Aggregate Log Change Notices: Not Supported 00:34:29.508 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.508 Zone Descriptor Change Notices: Not Supported 00:34:29.508 Discovery Log Change Notices: Supported 00:34:29.508 Controller Attributes 00:34:29.508 128-bit Host Identifier: Not Supported 00:34:29.508 Non-Operational Permissive Mode: Not Supported 00:34:29.508 NVM Sets: Not Supported 00:34:29.508 Read Recovery Levels: Not Supported 00:34:29.508 Endurance Groups: Not Supported 00:34:29.508 Predictable Latency Mode: Not Supported 00:34:29.508 Traffic Based Keep ALive: Not Supported 00:34:29.508 Namespace Granularity: Not Supported 00:34:29.508 SQ Associations: Not Supported 00:34:29.508 UUID List: Not Supported 00:34:29.508 Multi-Domain Subsystem: Not Supported 00:34:29.508 Fixed Capacity Management: Not Supported 00:34:29.508 Variable Capacity Management: Not Supported 00:34:29.508 Delete Endurance Group: Not Supported 00:34:29.508 Delete NVM Set: Not Supported 00:34:29.508 Extended LBA Formats Supported: Not Supported 00:34:29.508 Flexible 
Data Placement Supported: Not Supported 00:34:29.508 00:34:29.508 Controller Memory Buffer Support 00:34:29.508 ================================ 00:34:29.508 Supported: No 00:34:29.508 00:34:29.508 Persistent Memory Region Support 00:34:29.508 ================================ 00:34:29.508 Supported: No 00:34:29.508 00:34:29.508 Admin Command Set Attributes 00:34:29.508 ============================ 00:34:29.508 Security Send/Receive: Not Supported 00:34:29.508 Format NVM: Not Supported 00:34:29.508 Firmware Activate/Download: Not Supported 00:34:29.508 Namespace Management: Not Supported 00:34:29.508 Device Self-Test: Not Supported 00:34:29.508 Directives: Not Supported 00:34:29.508 NVMe-MI: Not Supported 00:34:29.508 Virtualization Management: Not Supported 00:34:29.508 Doorbell Buffer Config: Not Supported 00:34:29.508 Get LBA Status Capability: Not Supported 00:34:29.508 Command & Feature Lockdown Capability: Not Supported 00:34:29.508 Abort Command Limit: 1 00:34:29.508 Async Event Request Limit: 1 00:34:29.508 Number of Firmware Slots: N/A 00:34:29.508 Firmware Slot 1 Read-Only: N/A 00:34:29.508 Firmware Activation Without Reset: N/A 00:34:29.508 Multiple Update Detection Support: N/A 00:34:29.508 Firmware Update Granularity: No Information Provided 00:34:29.508 Per-Namespace SMART Log: No 00:34:29.508 Asymmetric Namespace Access Log Page: Not Supported 00:34:29.508 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:29.508 Command Effects Log Page: Not Supported 00:34:29.508 Get Log Page Extended Data: Supported 00:34:29.508 Telemetry Log Pages: Not Supported 00:34:29.508 Persistent Event Log Pages: Not Supported 00:34:29.508 Supported Log Pages Log Page: May Support 00:34:29.508 Commands Supported & Effects Log Page: Not Supported 00:34:29.508 Feature Identifiers & Effects Log Page:May Support 00:34:29.508 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.508 Data Area 4 for Telemetry Log: Not Supported 00:34:29.508 Error Log Page Entries 
Supported: 1 00:34:29.508 Keep Alive: Not Supported 00:34:29.508 00:34:29.508 NVM Command Set Attributes 00:34:29.508 ========================== 00:34:29.508 Submission Queue Entry Size 00:34:29.508 Max: 1 00:34:29.508 Min: 1 00:34:29.508 Completion Queue Entry Size 00:34:29.508 Max: 1 00:34:29.508 Min: 1 00:34:29.508 Number of Namespaces: 0 00:34:29.508 Compare Command: Not Supported 00:34:29.508 Write Uncorrectable Command: Not Supported 00:34:29.508 Dataset Management Command: Not Supported 00:34:29.508 Write Zeroes Command: Not Supported 00:34:29.508 Set Features Save Field: Not Supported 00:34:29.508 Reservations: Not Supported 00:34:29.508 Timestamp: Not Supported 00:34:29.508 Copy: Not Supported 00:34:29.508 Volatile Write Cache: Not Present 00:34:29.508 Atomic Write Unit (Normal): 1 00:34:29.508 Atomic Write Unit (PFail): 1 00:34:29.508 Atomic Compare & Write Unit: 1 00:34:29.508 Fused Compare & Write: Not Supported 00:34:29.508 Scatter-Gather List 00:34:29.508 SGL Command Set: Supported 00:34:29.508 SGL Keyed: Not Supported 00:34:29.508 SGL Bit Bucket Descriptor: Not Supported 00:34:29.508 SGL Metadata Pointer: Not Supported 00:34:29.508 Oversized SGL: Not Supported 00:34:29.508 SGL Metadata Address: Not Supported 00:34:29.508 SGL Offset: Supported 00:34:29.508 Transport SGL Data Block: Not Supported 00:34:29.508 Replay Protected Memory Block: Not Supported 00:34:29.508 00:34:29.508 Firmware Slot Information 00:34:29.508 ========================= 00:34:29.508 Active slot: 0 00:34:29.508 00:34:29.508 00:34:29.508 Error Log 00:34:29.508 ========= 00:34:29.508 00:34:29.508 Active Namespaces 00:34:29.508 ================= 00:34:29.508 Discovery Log Page 00:34:29.508 ================== 00:34:29.508 Generation Counter: 2 00:34:29.508 Number of Records: 2 00:34:29.508 Record Format: 0 00:34:29.508 00:34:29.508 Discovery Log Entry 0 00:34:29.508 ---------------------- 00:34:29.508 Transport Type: 3 (TCP) 00:34:29.508 Address Family: 1 (IPv4) 00:34:29.508 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:29.508 Entry Flags: 00:34:29.508 Duplicate Returned Information: 0 00:34:29.508 Explicit Persistent Connection Support for Discovery: 0 00:34:29.508 Transport Requirements: 00:34:29.508 Secure Channel: Not Specified 00:34:29.508 Port ID: 1 (0x0001) 00:34:29.508 Controller ID: 65535 (0xffff) 00:34:29.508 Admin Max SQ Size: 32 00:34:29.508 Transport Service Identifier: 4420 00:34:29.508 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:29.508 Transport Address: 10.0.0.1 00:34:29.508 Discovery Log Entry 1 00:34:29.508 ---------------------- 00:34:29.508 Transport Type: 3 (TCP) 00:34:29.508 Address Family: 1 (IPv4) 00:34:29.508 Subsystem Type: 2 (NVM Subsystem) 00:34:29.508 Entry Flags: 00:34:29.508 Duplicate Returned Information: 0 00:34:29.508 Explicit Persistent Connection Support for Discovery: 0 00:34:29.508 Transport Requirements: 00:34:29.508 Secure Channel: Not Specified 00:34:29.508 Port ID: 1 (0x0001) 00:34:29.508 Controller ID: 65535 (0xffff) 00:34:29.508 Admin Max SQ Size: 32 00:34:29.508 Transport Service Identifier: 4420 00:34:29.508 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:29.508 Transport Address: 10.0.0.1 00:34:29.508 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:29.508 get_feature(0x01) failed 00:34:29.508 get_feature(0x02) failed 00:34:29.508 get_feature(0x04) failed 00:34:29.508 ===================================================== 00:34:29.508 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:29.508 ===================================================== 00:34:29.508 Controller Capabilities/Features 00:34:29.508 ================================ 00:34:29.508 Vendor ID: 0000 00:34:29.508 Subsystem Vendor ID: 
0000 00:34:29.508 Serial Number: d3c08a689bbe655d8ec1 00:34:29.508 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:29.508 Firmware Version: 6.8.9-20 00:34:29.508 Recommended Arb Burst: 6 00:34:29.508 IEEE OUI Identifier: 00 00 00 00:34:29.508 Multi-path I/O 00:34:29.508 May have multiple subsystem ports: Yes 00:34:29.508 May have multiple controllers: Yes 00:34:29.508 Associated with SR-IOV VF: No 00:34:29.508 Max Data Transfer Size: Unlimited 00:34:29.508 Max Number of Namespaces: 1024 00:34:29.508 Max Number of I/O Queues: 128 00:34:29.508 NVMe Specification Version (VS): 1.3 00:34:29.508 NVMe Specification Version (Identify): 1.3 00:34:29.508 Maximum Queue Entries: 1024 00:34:29.508 Contiguous Queues Required: No 00:34:29.508 Arbitration Mechanisms Supported 00:34:29.508 Weighted Round Robin: Not Supported 00:34:29.508 Vendor Specific: Not Supported 00:34:29.508 Reset Timeout: 7500 ms 00:34:29.508 Doorbell Stride: 4 bytes 00:34:29.508 NVM Subsystem Reset: Not Supported 00:34:29.508 Command Sets Supported 00:34:29.508 NVM Command Set: Supported 00:34:29.508 Boot Partition: Not Supported 00:34:29.508 Memory Page Size Minimum: 4096 bytes 00:34:29.508 Memory Page Size Maximum: 4096 bytes 00:34:29.508 Persistent Memory Region: Not Supported 00:34:29.508 Optional Asynchronous Events Supported 00:34:29.508 Namespace Attribute Notices: Supported 00:34:29.508 Firmware Activation Notices: Not Supported 00:34:29.508 ANA Change Notices: Supported 00:34:29.508 PLE Aggregate Log Change Notices: Not Supported 00:34:29.508 LBA Status Info Alert Notices: Not Supported 00:34:29.508 EGE Aggregate Log Change Notices: Not Supported 00:34:29.508 Normal NVM Subsystem Shutdown event: Not Supported 00:34:29.508 Zone Descriptor Change Notices: Not Supported 00:34:29.509 Discovery Log Change Notices: Not Supported 00:34:29.509 Controller Attributes 00:34:29.509 128-bit Host Identifier: Supported 00:34:29.509 Non-Operational Permissive Mode: Not Supported 00:34:29.509 NVM Sets: Not 
Supported 00:34:29.509 Read Recovery Levels: Not Supported 00:34:29.509 Endurance Groups: Not Supported 00:34:29.509 Predictable Latency Mode: Not Supported 00:34:29.509 Traffic Based Keep ALive: Supported 00:34:29.509 Namespace Granularity: Not Supported 00:34:29.509 SQ Associations: Not Supported 00:34:29.509 UUID List: Not Supported 00:34:29.509 Multi-Domain Subsystem: Not Supported 00:34:29.509 Fixed Capacity Management: Not Supported 00:34:29.509 Variable Capacity Management: Not Supported 00:34:29.509 Delete Endurance Group: Not Supported 00:34:29.509 Delete NVM Set: Not Supported 00:34:29.509 Extended LBA Formats Supported: Not Supported 00:34:29.509 Flexible Data Placement Supported: Not Supported 00:34:29.509 00:34:29.509 Controller Memory Buffer Support 00:34:29.509 ================================ 00:34:29.509 Supported: No 00:34:29.509 00:34:29.509 Persistent Memory Region Support 00:34:29.509 ================================ 00:34:29.509 Supported: No 00:34:29.509 00:34:29.509 Admin Command Set Attributes 00:34:29.509 ============================ 00:34:29.509 Security Send/Receive: Not Supported 00:34:29.509 Format NVM: Not Supported 00:34:29.509 Firmware Activate/Download: Not Supported 00:34:29.509 Namespace Management: Not Supported 00:34:29.509 Device Self-Test: Not Supported 00:34:29.509 Directives: Not Supported 00:34:29.509 NVMe-MI: Not Supported 00:34:29.509 Virtualization Management: Not Supported 00:34:29.509 Doorbell Buffer Config: Not Supported 00:34:29.509 Get LBA Status Capability: Not Supported 00:34:29.509 Command & Feature Lockdown Capability: Not Supported 00:34:29.509 Abort Command Limit: 4 00:34:29.509 Async Event Request Limit: 4 00:34:29.509 Number of Firmware Slots: N/A 00:34:29.509 Firmware Slot 1 Read-Only: N/A 00:34:29.509 Firmware Activation Without Reset: N/A 00:34:29.509 Multiple Update Detection Support: N/A 00:34:29.509 Firmware Update Granularity: No Information Provided 00:34:29.509 Per-Namespace SMART Log: Yes 
00:34:29.509 Asymmetric Namespace Access Log Page: Supported 00:34:29.509 ANA Transition Time : 10 sec 00:34:29.509 00:34:29.509 Asymmetric Namespace Access Capabilities 00:34:29.509 ANA Optimized State : Supported 00:34:29.509 ANA Non-Optimized State : Supported 00:34:29.509 ANA Inaccessible State : Supported 00:34:29.509 ANA Persistent Loss State : Supported 00:34:29.509 ANA Change State : Supported 00:34:29.509 ANAGRPID is not changed : No 00:34:29.509 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:29.509 00:34:29.509 ANA Group Identifier Maximum : 128 00:34:29.509 Number of ANA Group Identifiers : 128 00:34:29.509 Max Number of Allowed Namespaces : 1024 00:34:29.509 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:29.509 Command Effects Log Page: Supported 00:34:29.509 Get Log Page Extended Data: Supported 00:34:29.509 Telemetry Log Pages: Not Supported 00:34:29.509 Persistent Event Log Pages: Not Supported 00:34:29.509 Supported Log Pages Log Page: May Support 00:34:29.509 Commands Supported & Effects Log Page: Not Supported 00:34:29.509 Feature Identifiers & Effects Log Page:May Support 00:34:29.509 NVMe-MI Commands & Effects Log Page: May Support 00:34:29.509 Data Area 4 for Telemetry Log: Not Supported 00:34:29.509 Error Log Page Entries Supported: 128 00:34:29.509 Keep Alive: Supported 00:34:29.509 Keep Alive Granularity: 1000 ms 00:34:29.509 00:34:29.509 NVM Command Set Attributes 00:34:29.509 ========================== 00:34:29.509 Submission Queue Entry Size 00:34:29.509 Max: 64 00:34:29.509 Min: 64 00:34:29.509 Completion Queue Entry Size 00:34:29.509 Max: 16 00:34:29.509 Min: 16 00:34:29.509 Number of Namespaces: 1024 00:34:29.509 Compare Command: Not Supported 00:34:29.509 Write Uncorrectable Command: Not Supported 00:34:29.509 Dataset Management Command: Supported 00:34:29.509 Write Zeroes Command: Supported 00:34:29.509 Set Features Save Field: Not Supported 00:34:29.509 Reservations: Not Supported 00:34:29.509 Timestamp: Not Supported 
00:34:29.509 Copy: Not Supported 00:34:29.509 Volatile Write Cache: Present 00:34:29.509 Atomic Write Unit (Normal): 1 00:34:29.509 Atomic Write Unit (PFail): 1 00:34:29.509 Atomic Compare & Write Unit: 1 00:34:29.509 Fused Compare & Write: Not Supported 00:34:29.509 Scatter-Gather List 00:34:29.509 SGL Command Set: Supported 00:34:29.509 SGL Keyed: Not Supported 00:34:29.509 SGL Bit Bucket Descriptor: Not Supported 00:34:29.509 SGL Metadata Pointer: Not Supported 00:34:29.509 Oversized SGL: Not Supported 00:34:29.509 SGL Metadata Address: Not Supported 00:34:29.509 SGL Offset: Supported 00:34:29.509 Transport SGL Data Block: Not Supported 00:34:29.509 Replay Protected Memory Block: Not Supported 00:34:29.509 00:34:29.509 Firmware Slot Information 00:34:29.509 ========================= 00:34:29.509 Active slot: 0 00:34:29.509 00:34:29.509 Asymmetric Namespace Access 00:34:29.509 =========================== 00:34:29.509 Change Count : 0 00:34:29.509 Number of ANA Group Descriptors : 1 00:34:29.509 ANA Group Descriptor : 0 00:34:29.509 ANA Group ID : 1 00:34:29.509 Number of NSID Values : 1 00:34:29.509 Change Count : 0 00:34:29.509 ANA State : 1 00:34:29.509 Namespace Identifier : 1 00:34:29.509 00:34:29.509 Commands Supported and Effects 00:34:29.509 ============================== 00:34:29.509 Admin Commands 00:34:29.509 -------------- 00:34:29.509 Get Log Page (02h): Supported 00:34:29.509 Identify (06h): Supported 00:34:29.509 Abort (08h): Supported 00:34:29.509 Set Features (09h): Supported 00:34:29.509 Get Features (0Ah): Supported 00:34:29.509 Asynchronous Event Request (0Ch): Supported 00:34:29.509 Keep Alive (18h): Supported 00:34:29.509 I/O Commands 00:34:29.509 ------------ 00:34:29.509 Flush (00h): Supported 00:34:29.509 Write (01h): Supported LBA-Change 00:34:29.509 Read (02h): Supported 00:34:29.509 Write Zeroes (08h): Supported LBA-Change 00:34:29.509 Dataset Management (09h): Supported 00:34:29.509 00:34:29.509 Error Log 00:34:29.509 ========= 
00:34:29.509 Entry: 0 00:34:29.509 Error Count: 0x3 00:34:29.509 Submission Queue Id: 0x0 00:34:29.509 Command Id: 0x5 00:34:29.509 Phase Bit: 0 00:34:29.509 Status Code: 0x2 00:34:29.509 Status Code Type: 0x0 00:34:29.509 Do Not Retry: 1 00:34:29.509 Error Location: 0x28 00:34:29.509 LBA: 0x0 00:34:29.509 Namespace: 0x0 00:34:29.509 Vendor Log Page: 0x0 00:34:29.509 ----------- 00:34:29.509 Entry: 1 00:34:29.509 Error Count: 0x2 00:34:29.509 Submission Queue Id: 0x0 00:34:29.509 Command Id: 0x5 00:34:29.509 Phase Bit: 0 00:34:29.509 Status Code: 0x2 00:34:29.509 Status Code Type: 0x0 00:34:29.509 Do Not Retry: 1 00:34:29.509 Error Location: 0x28 00:34:29.509 LBA: 0x0 00:34:29.509 Namespace: 0x0 00:34:29.509 Vendor Log Page: 0x0 00:34:29.509 ----------- 00:34:29.509 Entry: 2 00:34:29.509 Error Count: 0x1 00:34:29.509 Submission Queue Id: 0x0 00:34:29.509 Command Id: 0x4 00:34:29.509 Phase Bit: 0 00:34:29.509 Status Code: 0x2 00:34:29.509 Status Code Type: 0x0 00:34:29.509 Do Not Retry: 1 00:34:29.509 Error Location: 0x28 00:34:29.509 LBA: 0x0 00:34:29.509 Namespace: 0x0 00:34:29.509 Vendor Log Page: 0x0 00:34:29.509 00:34:29.509 Number of Queues 00:34:29.509 ================ 00:34:29.509 Number of I/O Submission Queues: 128 00:34:29.509 Number of I/O Completion Queues: 128 00:34:29.509 00:34:29.509 ZNS Specific Controller Data 00:34:29.509 ============================ 00:34:29.509 Zone Append Size Limit: 0 00:34:29.509 00:34:29.509 00:34:29.509 Active Namespaces 00:34:29.509 ================= 00:34:29.509 get_feature(0x05) failed 00:34:29.509 Namespace ID:1 00:34:29.509 Command Set Identifier: NVM (00h) 00:34:29.509 Deallocate: Supported 00:34:29.509 Deallocated/Unwritten Error: Not Supported 00:34:29.509 Deallocated Read Value: Unknown 00:34:29.509 Deallocate in Write Zeroes: Not Supported 00:34:29.509 Deallocated Guard Field: 0xFFFF 00:34:29.509 Flush: Supported 00:34:29.509 Reservation: Not Supported 00:34:29.509 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:29.509 Size (in LBAs): 1953525168 (931GiB) 00:34:29.509 Capacity (in LBAs): 1953525168 (931GiB) 00:34:29.510 Utilization (in LBAs): 1953525168 (931GiB) 00:34:29.510 UUID: 97301d2f-2ba7-457a-a9f2-ae3edd509154 00:34:29.510 Thin Provisioning: Not Supported 00:34:29.510 Per-NS Atomic Units: Yes 00:34:29.510 Atomic Boundary Size (Normal): 0 00:34:29.510 Atomic Boundary Size (PFail): 0 00:34:29.510 Atomic Boundary Offset: 0 00:34:29.510 NGUID/EUI64 Never Reused: No 00:34:29.510 ANA group ID: 1 00:34:29.510 Namespace Write Protected: No 00:34:29.510 Number of LBA Formats: 1 00:34:29.510 Current LBA Format: LBA Format #00 00:34:29.510 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:29.510 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:29.510 rmmod nvme_tcp 00:34:29.510 rmmod nvme_fabrics 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:29.510 16:40:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:32.043 16:40:21 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:32.043 16:40:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:32.980 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:32.980 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:34:32.980 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:34:33.922 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:34:34.181 00:34:34.181 real 0m9.794s 00:34:34.181 user 0m2.143s 00:34:34.181 sys 0m3.680s 00:34:34.181 16:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:34.181 16:40:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:34.181 ************************************ 00:34:34.181 END TEST nvmf_identify_kernel_target 00:34:34.181 ************************************ 00:34:34.181 16:40:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:34.182 ************************************ 00:34:34.182 START TEST nvmf_auth_host 00:34:34.182 ************************************ 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:34.182 * Looking for test storage... 
00:34:34.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.182 --rc genhtml_branch_coverage=1 00:34:34.182 --rc genhtml_function_coverage=1 00:34:34.182 --rc genhtml_legend=1 00:34:34.182 --rc geninfo_all_blocks=1 00:34:34.182 --rc geninfo_unexecuted_blocks=1 00:34:34.182 00:34:34.182 ' 00:34:34.182 16:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.182 --rc genhtml_branch_coverage=1 00:34:34.182 --rc genhtml_function_coverage=1 00:34:34.182 --rc genhtml_legend=1 00:34:34.182 --rc geninfo_all_blocks=1 00:34:34.182 --rc geninfo_unexecuted_blocks=1 00:34:34.182 00:34:34.182 ' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.182 --rc genhtml_branch_coverage=1 00:34:34.182 --rc genhtml_function_coverage=1 00:34:34.182 --rc genhtml_legend=1 00:34:34.182 --rc geninfo_all_blocks=1 00:34:34.182 --rc geninfo_unexecuted_blocks=1 00:34:34.182 00:34:34.182 ' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:34.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:34.182 --rc genhtml_branch_coverage=1 00:34:34.182 --rc genhtml_function_coverage=1 00:34:34.182 --rc genhtml_legend=1 00:34:34.182 --rc geninfo_all_blocks=1 00:34:34.182 --rc geninfo_unexecuted_blocks=1 00:34:34.182 00:34:34.182 ' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.182 16:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:34.182 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:34.183 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:34.183 16:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:34.183 16:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:36.715 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:36.716 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:36.716 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:36.716 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:36.716 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:36.716 16:40:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:36.716 16:40:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:36.716 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:36.716 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:34:36.716 00:34:36.716 --- 10.0.0.2 ping statistics --- 00:34:36.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.716 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:36.716 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:36.716 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:34:36.716 00:34:36.716 --- 10.0.0.1 ping statistics --- 00:34:36.716 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:36.716 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=383304 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:36.716 16:40:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 383304 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 383304 ']' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:36.716 16:40:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:36.716 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=eaee6e617d0fb284132cf34c1c67bf9b 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.8U4 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key eaee6e617d0fb284132cf34c1c67bf9b 0 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 eaee6e617d0fb284132cf34c1c67bf9b 0 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=eaee6e617d0fb284132cf34c1c67bf9b 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:36.717 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.8U4 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.8U4 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.8U4 00:34:36.975 16:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=b7644e194b86e214c9ab1e24b65f3bd106b3eb4908f4e2fb0e59ebc142128f5b 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.dQ6 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key b7644e194b86e214c9ab1e24b65f3bd106b3eb4908f4e2fb0e59ebc142128f5b 3 00:34:36.975 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 b7644e194b86e214c9ab1e24b65f3bd106b3eb4908f4e2fb0e59ebc142128f5b 3 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=b7644e194b86e214c9ab1e24b65f3bd106b3eb4908f4e2fb0e59ebc142128f5b 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.dQ6 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.dQ6 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.dQ6 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3cca2668b14c9c9457387278ab5cff032b38666438d48fb 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.FEa 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3cca2668b14c9c9457387278ab5cff032b38666438d48fb 0 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c3cca2668b14c9c9457387278ab5cff032b38666438d48fb 0 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.976 16:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3cca2668b14c9c9457387278ab5cff032b38666438d48fb 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.FEa 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.FEa 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.FEa 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cf4286138cb615571803d791ff3436402a818aa2bc2160a1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.Efc 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cf4286138cb615571803d791ff3436402a818aa2bc2160a1 2 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 cf4286138cb615571803d791ff3436402a818aa2bc2160a1 2 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cf4286138cb615571803d791ff3436402a818aa2bc2160a1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.Efc 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.Efc 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Efc 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=418c0112f94af03025d7301130a653ae 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.1kj 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 418c0112f94af03025d7301130a653ae 1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 418c0112f94af03025d7301130a653ae 1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=418c0112f94af03025d7301130a653ae 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.1kj 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.1kj 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.1kj 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=13ad1b3cc7de8dd338c2849292af4900 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.srR 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 13ad1b3cc7de8dd338c2849292af4900 1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 13ad1b3cc7de8dd338c2849292af4900 1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=13ad1b3cc7de8dd338c2849292af4900 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:36.976 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.srR 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.srR 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.srR 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:37.235 16:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=338848ef0e9ef5e653d0b6aebf2db91d697785b6f376dc7a 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.xvI 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 338848ef0e9ef5e653d0b6aebf2db91d697785b6f376dc7a 2 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 338848ef0e9ef5e653d0b6aebf2db91d697785b6f376dc7a 2 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=338848ef0e9ef5e653d0b6aebf2db91d697785b6f376dc7a 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.xvI 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.xvI 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.xvI 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.235 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c3418c372c561a0646f5acc44c357dde 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.x1K 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c3418c372c561a0646f5acc44c357dde 0 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c3418c372c561a0646f5acc44c357dde 0 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c3418c372c561a0646f5acc44c357dde 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.x1K 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.x1K 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.x1K 00:34:37.236 16:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=054e89666a7856c91cb2216f831c85dacac1d6a8e029c89bf5cd61df1646b974 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.tqB 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 054e89666a7856c91cb2216f831c85dacac1d6a8e029c89bf5cd61df1646b974 3 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 054e89666a7856c91cb2216f831c85dacac1d6a8e029c89bf5cd61df1646b974 3 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=054e89666a7856c91cb2216f831c85dacac1d6a8e029c89bf5cd61df1646b974 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.tqB 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.tqB 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.tqB 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 383304 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 383304 ']' 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:37.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
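The `gen_dhchap_key` / `format_dhchap_key` calls traced above read random hex from `/dev/urandom` via `xxd`, then wrap it into SPDK's `DHHC-1:<digest>:<base64>:` secret format (the inline `python -` step) before writing it to a `chmod 0600` temp file. A minimal sketch of that wrapping, assuming the secret body is the ASCII hex string with a little-endian CRC-32 appended before base64 encoding (the function name mirrors the script's helper; this is a reconstruction, not SPDK's actual implementation):

```python
import base64
import zlib

def format_dhchap_key(hex_key: str, digest: int) -> str:
    """Sketch of the DHHC-1 secret layout seen in this log (assumed format)."""
    data = hex_key.encode("ascii")                 # key kept as an ASCII hex string
    crc = zlib.crc32(data).to_bytes(4, "little")   # CRC-32 appended as integrity check
    return "DHHC-1:%02d:%s:" % (digest, base64.b64encode(data + crc).decode("ascii"))

# e.g. the sha256 key generated above, with digest index 1
print(format_dhchap_key("13ad1b3cc7de8dd338c2849292af4900", 1))
```

The digest index follows the `digests` map declared in the log (`null=0`, `sha256=1`, `sha384=2`, `sha512=3`), and the resulting string is what `echo` later feeds into `keyring_file_add_key`.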
00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:37.236 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.494 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:37.494 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.8U4 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.dQ6 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.dQ6 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.FEa 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Efc ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.Efc 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.1kj 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.srR ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.srR 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.xvI 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.x1K ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.x1K 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.tqB 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.495 16:40:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:37.495 16:40:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:38.873 Waiting for block devices as requested 00:34:38.873 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:34:38.873 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:38.873 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:39.132 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:39.132 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:39.132 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:39.390 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:39.390 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:39.390 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:39.390 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:34:39.649 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:34:39.649 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:34:39.649 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:34:39.649 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:34:39.908 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:34:39.908 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:34:39.908 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:40.476 No valid GPT data, bailing 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:34:40.476 00:34:40.476 Discovery Log Number of Records 2, Generation counter 2 00:34:40.476 =====Discovery Log Entry 0====== 00:34:40.476 trtype: tcp 00:34:40.476 adrfam: ipv4 00:34:40.476 subtype: current discovery subsystem 00:34:40.476 treq: not specified, sq flow control disable supported 00:34:40.476 portid: 1 00:34:40.476 trsvcid: 4420 00:34:40.476 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:40.476 traddr: 10.0.0.1 00:34:40.476 eflags: none 00:34:40.476 sectype: none 00:34:40.476 =====Discovery Log Entry 1====== 00:34:40.476 trtype: tcp 00:34:40.476 adrfam: ipv4 00:34:40.476 subtype: nvme subsystem 00:34:40.476 treq: not specified, sq flow control disable supported 00:34:40.476 portid: 1 00:34:40.476 trsvcid: 4420 00:34:40.476 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:40.476 traddr: 10.0.0.1 00:34:40.476 eflags: none 00:34:40.476 sectype: none 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.476 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.477 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.739 nvme0n1 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.739 16:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.739 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.740 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.998 nvme0n1 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.998 16:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:40.998 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.999 
16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.999 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.257 nvme0n1 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.257 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:41.516 nvme0n1 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:41.516 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.517 nvme0n1 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.517 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:41.775 16:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.775 16:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.775 nvme0n1 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.775 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.034 
16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.034 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:42.292 
16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.292 16:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.292 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.550 nvme0n1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.550 16:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.550 16:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.550 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.808 nvme0n1 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.808 16:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:42.808 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.809 16:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.067 nvme0n1 00:34:43.067 16:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:43.067 16:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.067 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.326 nvme0n1 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
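`get_main_ns_ip`, traced repeatedly above, only picks which environment variable names the target address for the transport in use (`NVMF_FIRST_TARGET_IP` for rdma, `NVMF_INITIATOR_IP` for tcp) and echoes its value. A minimal sketch of that selection logic (the function and its `env` parameter are illustrative; the candidate table mirrors `nvmf/common.sh` in the trace):

```python
def get_main_ns_ip(transport: str, env: dict) -> str:
    """Return the target IP for the active transport, mirroring the
    ip_candidates lookup traced in nvmf/common.sh."""
    ip_candidates = {
        "rdma": "NVMF_FIRST_TARGET_IP",
        "tcp": "NVMF_INITIATOR_IP",
    }
    var = ip_candidates[transport]
    ip = env.get(var, "")
    if not ip:
        raise RuntimeError(f"{var} is not set")
    return ip
```

In this run the tcp candidate resolves to 10.0.0.1, which is the address passed to every `bdev_nvme_attach_controller` call.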
00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.326 16:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.326 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 nvme0n1 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.585 16:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.152 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.411 nvme0n1 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.411 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
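Every iteration above follows the same cycle: program key `keyid` into the target, restrict the host to a single digest/dhgroup pair via `bdev_nvme_set_options`, attach with `--dhchap-key keyN` (and `--dhchap-ctrlr-key ckeyN` when a controller key exists, enabling bidirectional authentication), confirm `nvme0` came up, then detach. A hypothetical driver for that matrix, where `rpc_cmd` is a caller-supplied stand-in for issuing one SPDK RPC (not a real API):

```python
def run_auth_matrix(rpc_cmd, keys, ckeys,
                    digest="sha256", dhgroups=("ffdhe3072", "ffdhe4096")):
    """One attach/verify/detach cycle per (dhgroup, keyid), mirroring
    the host/auth.sh loop in the log above."""
    for dhgroup in dhgroups:
        for keyid in range(len(keys)):
            # limit the host to a single digest/dhgroup combination
            rpc_cmd("bdev_nvme_set_options",
                    dhchap_digests=[digest], dhchap_dhgroups=[dhgroup])
            extra = {}
            if ckeys[keyid]:  # controller key present -> bidirectional auth
                extra["dhchap_ctrlr_key"] = f"ckey{keyid}"
            rpc_cmd("bdev_nvme_attach_controller", name="nvme0",
                    trtype="tcp", adrfam="ipv4", traddr="10.0.0.1",
                    trsvcid="4420", dhchap_key=f"key{keyid}", **extra)
            # the authenticated controller must show up before detaching
            ctrls = rpc_cmd("bdev_nvme_get_controllers")
            assert ctrls[0]["name"] == "nvme0"
            rpc_cmd("bdev_nvme_detach_controller", "nvme0")
```

Note how key 4 in the log is attached without a `ckey` argument, matching the empty `ckey=` traced for that iteration.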
00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.669 
16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.669 16:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 nvme0n1 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.927 16:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.927 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.928 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.185 nvme0n1 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.185 16:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:45.185 
16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:45.185 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.186 16:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.186 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.443 nvme0n1 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.443 16:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:45.443 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.444 
16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.444 16:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.701 nvme0n1 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.701 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.958 16:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.858 16:40:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.858 16:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.116 nvme0n1 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.116 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.117 16:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.117 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.683 nvme0n1 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.683 16:40:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.683 16:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.249 nvme0n1 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.249 16:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.249 16:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.249 16:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.249 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.250 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.250 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.815 nvme0n1 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.815 16:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:49.815 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:49.816 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:49.816 16:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.073 nvme0n1
16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.073 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:50.073 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:50.073 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc:
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=:
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc:
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=:
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:50.331 16:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.263 nvme0n1
16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.263 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==:
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==:
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==:
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==:
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:51.264 16:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.197 nvme0n1
16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot:
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo:
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot:
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo:
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:52.197 16:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.131 nvme0n1
16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==:
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1:
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==:
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1:
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.131 16:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.697 nvme0n1
16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.697 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:53.697 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.697 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:53.697 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.697 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=:
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=:
00:34:53.954 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:53.955 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.889 nvme0n1
16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc:
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=:
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc:
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=:
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.889 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.890 16:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.890 nvme0n1
16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==:
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==:
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==:
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]]
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==:
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests
sha384 --dhchap-dhgroups ffdhe2048 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.890 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.152 nvme0n1 
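Each iteration in the log above follows the same `connect_authenticate` shape: restrict the allowed digest/dhgroup via `bdev_nvme_set_options`, attach with the per-key DH-HMAC-CHAP secrets, confirm the controller came up as `nvme0`, then detach. As a minimal sketch, the attach command line for a given key id can be rebuilt like this (transport address, port, and NQNs are taken verbatim from the log; nothing is executed here, the function only prints the command):

```shell
# Build the bdev_nvme_attach_controller invocation used for one key id.
# The controller key (ckeyN) is optional -- key id 4 in this log has none,
# so the --dhchap-ctrlr-key argument is added conditionally.
build_attach_cmd() {
  local keyid=$1 has_ckey=${2:-yes} ckey_opt=""
  [ "$has_ckey" = yes ] && ckey_opt=" --dhchap-ctrlr-key ckey${keyid}"
  echo "rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4" \
       "-a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0" \
       "-n nqn.2024-02.io.spdk:cnode0 --dhchap-key key${keyid}${ckey_opt}"
}
```

The conditional mirrors the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion seen at `host/auth.sh@58`.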
00:34:55.152 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:55.153 16:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.153 
16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.153 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 nvme0n1 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.411 16:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.411 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.412 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.670 nvme0n1 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.670 16:40:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.670 16:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.929 nvme0n1 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
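The `DHHC-1:<subtype>:<base64>:` strings echoed throughout the log are NVMe DH-HMAC-CHAP secrets. To my understanding of the key format, the second field is the hash subtype (`00` means the secret is used as-is) and the base64 payload is the raw secret followed by a 4-byte CRC32 trailer, so a 48-byte secret decodes to 52 bytes. A hedged sketch that splits a key into its fields, assuming that layout:

```shell
# Split a DHHC-1 secret into "<subtype> <decoded payload bytes>".
# Field 2 is the hash subtype; the decoded payload is assumed to be
# <secret><crc32>, i.e. secret length + 4 bytes.
parse_dhchap_key() {
  local subtype b64
  subtype=$(printf '%s' "$1" | cut -d: -f2)
  b64=$(printf '%s' "$1" | cut -d: -f3)
  printf '%s %s\n' "$subtype" "$(printf '%s' "$b64" | base64 -d | wc -c)"
}
```

A key such as `key4` above (subtype `03`) carries a longer secret, which is why its base64 payload is visibly longer than the subtype-`00` keys.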
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.929 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.187 nvme0n1 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.187 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.188 
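The three `echo` lines at `host/auth.sh@48`–`@50` (`hmac(sha384)`, the dhgroup, the key) are the target-side half of `nvmet_auth_set_key`: they correspond to per-host DH-HMAC-CHAP attributes in the Linux kernel target's configfs. A dry-run sketch of that mapping, with the configfs path layout assumed rather than taken from the log (nothing is written here, the function only prints what would go where):

```shell
# Dry-run sketch of nvmet_auth_set_key: print the "<value> > <attribute>"
# pairs for one host. The host NQN matches the log; the configfs layout
# (/sys/kernel/config/nvmet/hosts/<nqn>/dhchap_*) is an assumption.
nvmet_auth_set_key_dryrun() {
  local digest=$1 dhgroup=$2 key=$3
  local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  echo "hmac(${digest}) > ${host}/dhchap_hash"
  echo "${dhgroup} > ${host}/dhchap_dhgroup"
  echo "${key} > ${host}/dhchap_key"
}
```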
16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.188 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 nvme0n1 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 
00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.446 16:40:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.446 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.704 nvme0n1 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.704 16:40:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:56.704 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.705 16:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.965 nvme0n1 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.965 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.225 nvme0n1 00:34:57.225 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.225 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.226 16:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.226 16:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.226 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.226 16:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.485 nvme0n1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.485 
16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.485 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.486 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.744 nvme0n1 00:34:57.744 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.744 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.744 16:40:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.744 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.744 16:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:57.744 16:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.744 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.003 nvme0n1 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.003 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.261 16:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.261 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.521 nvme0n1 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.521 16:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.521 16:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.521 
16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.521 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.781 nvme0n1 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.781 16:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.781 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.781 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.781 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.781 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.781 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:58.782 16:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.782 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.350 nvme0n1 
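The `nvmet_auth_set_key` / `connect_authenticate` iterations above cycle through keys of the form `DHHC-1:<id>:<base64>:`. As a side note, this is the NVMe-oF DH-HMAC-CHAP secret representation: the second field is a hash identifier and the base64 payload decodes to the raw secret followed by a 4-byte CRC32 tail. A minimal sketch, using one `keyid=2` key copied from the log above (the field meanings are taken from the DH-HMAC-CHAP secret format, not from this log):

```shell
# Hypothetical sketch: decode a DHHC-1 secret from the test log and report
# its hash id and raw secret length. Assumes GNU base64 (Linux CI host).
key='DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot:'

# Field 2: hash identifier (00 = unhashed; 01/02/03 = SHA-256/384/512).
hash_id=$(echo "$key" | cut -d: -f2)

# Field 3: base64(secret || crc32); decode and measure.
blob_len=$(echo "$key" | cut -d: -f3 | base64 -d | wc -c)

# Last 4 bytes are a CRC32 of the secret, so strip them from the count.
secret_len=$((blob_len - 4))

echo "hash_id=$hash_id secret_len=$secret_len"
```

For the key above this reports a 32-byte secret, matching the shortest secret size DH-HMAC-CHAP allows (32, 48, or 64 bytes); the `keyid=4` keys in the log decode to the 64-byte variant.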
00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:59.350 16:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.350 
16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.350 16:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.919 nvme0n1 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.919 16:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.919 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.487 nvme0n1 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.487 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.488 16:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.058 nvme0n1 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.058 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:01.627 nvme0n1 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:01.627 16:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.627 16:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.627 16:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.564 nvme0n1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:02.564 16:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.564 16:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.503 nvme0n1 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.503 
16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.503 16:40:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.503 16:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.439 nvme0n1 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.439 16:40:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.439 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.440 16:40:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.440 16:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 nvme0n1 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:05.446 16:40:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:05.446 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.447 16:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.015 nvme0n1 00:35:06.015 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.015 
16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.015 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.015 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.015 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.015 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.276 nvme0n1 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.276 16:40:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.276 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 
00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # ip_candidates=() 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 nvme0n1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:06.537 16:40:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.537 16:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.798 nvme0n1 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.798 16:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.798 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
digest=sha512 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.799 16:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.799 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.058 nvme0n1 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.058 16:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.058 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.059 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.059 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.059 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.059 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.319 nvme0n1 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.319 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.320 16:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.320 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.581 nvme0n1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.581 16:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.581 
16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
local -A ip_candidates 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.581 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.843 nvme0n1 00:35:07.843 16:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 
00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.843 16:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.843 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.102 nvme0n1 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.102 16:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.102 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.103 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.362 nvme0n1 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.362 16:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.362 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.622 nvme0n1 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.622 16:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.622 16:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.881 nvme0n1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.881 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:08.881 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.881 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.881 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.139 nvme0n1 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.139 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.139 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:09.397 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:09.398 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.398 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.659 nvme0n1 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe4096 3 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:09.659 16:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.659 16:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.920 nvme0n1 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:09.920 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.921 
16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.921 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.180 nvme0n1 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.180 16:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.180 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 
-- # local -A ip_candidates 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.439 16:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.698 nvme0n1 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.698 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:10.957 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:10.958 16:41:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # 
[[ -z tcp ]] 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.958 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.216 nvme0n1 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.216 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:11.477 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:11.478 
16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.478 16:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 nvme0n1 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.049 16:41:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.049 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
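The cycle that just completed (nvmet_auth_set_key, bdev_nvme_set_options, bdev_nvme_attach_controller, then detach) repeats for every key id under each digest/DH-group pair. A minimal bash sketch of that iteration pattern, with `echo` standing in for the real `nvmet_auth_set_key`/`connect_authenticate` calls (the array contents below are assumptions based on the values visible in this log, not the script's exact lists):

```shell
#!/usr/bin/env bash
# Iteration pattern visible in host/auth.sh: for each digest and DH
# group, every keyid is provisioned on the target and then exercised
# with one connect/disconnect cycle.
digests=("sha256" "sha384" "sha512")           # this excerpt shows sha512
dhgroups=("ffdhe2048" "ffdhe6144" "ffdhe8192") # this excerpt shows ffdhe6144/ffdhe8192
keys=(key0 key1 key2 key3 key4)                # keyids 0..4 appear in the log

for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
            # real script: nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            #              connect_authenticate "$digest" "$dhgroup" "$keyid"
            echo "$digest $dhgroup $keyid"
        done
    done
done
```

With three digests, three groups, and five key ids this sketch emits 45 combinations, which matches the steady drumbeat of set-key/attach/detach cycles in the log above.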
00:35:12.621 nvme0n1 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.621 
16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.621 16:41:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.881 nvme0n1 00:35:12.881 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.881 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.881 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.881 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.881 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZWFlZTZlNjE3ZDBmYjI4NDEzMmNmMzRjMWM2N2JmOWL8yDzc: 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Yjc2NDRlMTk0Yjg2ZTIxNGM5YWIxZTI0YjY1ZjNiZDEwNmIzZWI0OTA4ZjRlMmZiMGU1OWViYzE0MjEyOGY1YgNpixs=: 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.142 16:41:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.142 16:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.086 nvme0n1 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.086 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.086 16:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.087 16:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.087 16:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.027 nvme0n1 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.027 16:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.027 16:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.027 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.596 nvme0n1 00:35:15.596 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.596 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.596 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.596 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.596 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.857 16:41:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==: 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzM0MThjMzcyYzU2MWEwNjQ2ZjVhY2M0NGMzNTdkZGWuXWg1: 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.857 16:41:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
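The `DHHC-1:<hh>:<base64>:` strings passed around above follow the NVMe DH-HMAC-CHAP secret representation: the base64 payload is the raw secret with a 4-byte CRC-32 appended, and the `hh` field indicates the transformation hash (`01` SHA-256/32-byte secret, `02` SHA-384/48-byte, `03` SHA-512/64-byte). A small sketch that pulls apart one of the keyid-3 secrets from this log (the CRC check itself is omitted; only the field split and length arithmetic are shown):

```shell
#!/usr/bin/env bash
# Decompose one DH-HMAC-CHAP secret from the log. hh=02 implies a
# SHA-384-transformed secret, i.e. 48 secret bytes + 4 CRC-32 bytes
# = 52 bytes once the base64 payload is decoded.
key="DHHC-1:02:MzM4ODQ4ZWYwZTllZjVlNjUzZDBiNmFlYmYyZGI5MWQ2OTc3ODViNmYzNzZkYzdh+pynJA==:"

IFS=: read -r prefix hash_id payload _ <<< "$key"
decoded_len=$(printf '%s' "$payload" | base64 -d | wc -c)
secret_len=$((decoded_len - 4))   # strip the trailing CRC-32

echo "format:     $prefix"        # DHHC-1
echo "hash class: $hash_id"       # 02 -> SHA-384 transform
echo "secret:     $secret_len bytes (+4 CRC)"
```

Running this against the key above yields a 52-byte decoded payload, i.e. a 48-byte secret, consistent with the `02` class.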
00:35:16.798 nvme0n1 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.798 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MDU0ZTg5NjY2YTc4NTZjOTFjYjIyMTZmODMxYzg1ZGFjYWMxZDZhOGUwMjljODliZjVjZDYxZGYxNjQ2Yjk3NDiSfYg=: 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.799 
16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 16:41:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.737 nvme0n1 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.737 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:17.738 
16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 request: 00:35:17.738 { 00:35:17.738 "name": "nvme0", 00:35:17.738 "trtype": "tcp", 00:35:17.738 "traddr": "10.0.0.1", 00:35:17.738 "adrfam": "ipv4", 00:35:17.738 "trsvcid": "4420", 00:35:17.738 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:17.738 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:17.738 "prchk_reftag": false, 00:35:17.738 "prchk_guard": false, 00:35:17.738 "hdgst": false, 00:35:17.738 "ddgst": false, 00:35:17.738 "allow_unrecognized_csi": false, 00:35:17.738 "method": "bdev_nvme_attach_controller", 00:35:17.738 "req_id": 1 00:35:17.738 } 00:35:17.738 Got JSON-RPC error response 00:35:17.738 response: 00:35:17.738 { 00:35:17.738 "code": -5, 00:35:17.738 "message": "Input/output 
error" 00:35:17.738 } 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.738 16:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 request: 00:35:17.738 { 00:35:17.738 "name": "nvme0", 00:35:17.738 "trtype": "tcp", 00:35:17.738 "traddr": "10.0.0.1", 
00:35:17.738 "adrfam": "ipv4", 00:35:17.738 "trsvcid": "4420", 00:35:17.738 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:17.738 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:17.738 "prchk_reftag": false, 00:35:17.738 "prchk_guard": false, 00:35:17.738 "hdgst": false, 00:35:17.738 "ddgst": false, 00:35:17.738 "dhchap_key": "key2", 00:35:17.738 "allow_unrecognized_csi": false, 00:35:17.738 "method": "bdev_nvme_attach_controller", 00:35:17.738 "req_id": 1 00:35:17.738 } 00:35:17.738 Got JSON-RPC error response 00:35:17.738 response: 00:35:17.738 { 00:35:17.738 "code": -5, 00:35:17.738 "message": "Input/output error" 00:35:17.738 } 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.738 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.000 16:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:18.000 16:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.000 request: 00:35:18.000 { 00:35:18.000 "name": "nvme0", 00:35:18.000 "trtype": "tcp", 00:35:18.000 "traddr": "10.0.0.1", 00:35:18.000 "adrfam": "ipv4", 00:35:18.000 "trsvcid": "4420", 00:35:18.000 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:18.000 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:18.000 "prchk_reftag": false, 00:35:18.000 "prchk_guard": false, 00:35:18.000 "hdgst": false, 00:35:18.000 "ddgst": false, 00:35:18.000 "dhchap_key": "key1", 00:35:18.000 "dhchap_ctrlr_key": "ckey2", 00:35:18.000 "allow_unrecognized_csi": false, 00:35:18.000 "method": "bdev_nvme_attach_controller", 00:35:18.000 "req_id": 1 00:35:18.000 } 00:35:18.000 Got JSON-RPC error response 00:35:18.000 response: 00:35:18.000 { 00:35:18.000 "code": -5, 00:35:18.000 "message": "Input/output error" 00:35:18.000 } 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@128 -- # get_main_ns_ip 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.000 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.001 nvme0n1 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.001 16:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.001 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.262 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.262 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.262 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 16:41:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 request: 00:35:18.263 { 00:35:18.263 "name": "nvme0", 00:35:18.263 "dhchap_key": "key1", 00:35:18.263 "dhchap_ctrlr_key": "ckey2", 00:35:18.263 "method": "bdev_nvme_set_keys", 00:35:18.263 "req_id": 1 00:35:18.263 } 00:35:18.263 Got JSON-RPC error response 00:35:18.263 response: 00:35:18.263 { 00:35:18.263 "code": -13, 00:35:18.263 "message": "Permission denied" 00:35:18.263 } 00:35:18.263 
16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:18.263 16:41:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:19.642 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:19.643 16:41:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzNjY2EyNjY4YjE0YzljOTQ1NzM4NzI3OGFiNWNmZjAzMmIzODY2NjQzOGQ0OGZiZRTjgQ==: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:Y2Y0Mjg2MTM4Y2I2MTU1NzE4MDNkNzkxZmYzNDM2NDAyYTgxOGFhMmJjMjE2MGExYeL7Kw==: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.643 nvme0n1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDE4YzAxMTJmOTRhZjAzMDI1ZDczMDExMzBhNjUzYWWdnTot: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MTNhZDFiM2NjN2RlOGRkMzM4YzI4NDkyOTJhZjQ5MDA5Isjo: 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:35:19.643 request: 00:35:19.643 { 00:35:19.643 "name": "nvme0", 00:35:19.643 "dhchap_key": "key2", 00:35:19.643 "dhchap_ctrlr_key": "ckey1", 00:35:19.643 "method": "bdev_nvme_set_keys", 00:35:19.643 "req_id": 1 00:35:19.643 } 00:35:19.643 Got JSON-RPC error response 00:35:19.643 response: 00:35:19.643 { 00:35:19.643 "code": -13, 00:35:19.643 "message": "Permission denied" 00:35:19.643 } 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:19.643 16:41:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:20.581 rmmod nvme_tcp 00:35:20.581 rmmod nvme_fabrics 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 383304 ']' 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 383304 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 383304 ']' 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 383304 00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 
00:35:20.581 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 383304 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 383304' 00:35:20.840 killing process with pid 383304 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 383304 00:35:20.840 16:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 383304 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:20.840 16:41:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:23.376 16:41:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:24.312 0000:00:04.7 (8086 0e27): ioatdma -> 
vfio-pci 00:35:24.312 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:24.312 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:35:24.312 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:35:25.253 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:35:25.512 16:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.8U4 /tmp/spdk.key-null.FEa /tmp/spdk.key-sha256.1kj /tmp/spdk.key-sha384.xvI /tmp/spdk.key-sha512.tqB /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:25.512 16:41:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:26.891 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:26.891 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:26.891 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:26.891 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:26.891 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:26.891 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:26.891 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:26.891 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:26.891 0000:00:04.0 (8086 
0e20): Already using the vfio-pci driver 00:35:26.891 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:26.891 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:26.891 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:26.891 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:26.891 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:26.891 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:26.891 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:26.891 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:26.891 00:35:26.891 real 0m52.700s 00:35:26.891 user 0m50.196s 00:35:26.891 sys 0m6.220s 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.891 ************************************ 00:35:26.891 END TEST nvmf_auth_host 00:35:26.891 ************************************ 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.891 ************************************ 00:35:26.891 START TEST nvmf_digest 00:35:26.891 ************************************ 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:26.891 * Looking for test storage... 
00:35:26.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:26.891 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:26.892 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.153 --rc genhtml_branch_coverage=1 00:35:27.153 --rc genhtml_function_coverage=1 00:35:27.153 --rc genhtml_legend=1 00:35:27.153 --rc geninfo_all_blocks=1 00:35:27.153 --rc geninfo_unexecuted_blocks=1 00:35:27.153 00:35:27.153 ' 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:27.153 --rc lcov_branch_coverage=1 
--rc lcov_function_coverage=1 00:35:27.153 --rc genhtml_branch_coverage=1 00:35:27.153 --rc genhtml_function_coverage=1 00:35:27.153 --rc genhtml_legend=1 00:35:27.153 --rc geninfo_all_blocks=1 00:35:27.153 --rc geninfo_unexecuted_blocks=1 00:35:27.153 00:35:27.153 ' 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.153 --rc genhtml_branch_coverage=1 00:35:27.153 --rc genhtml_function_coverage=1 00:35:27.153 --rc genhtml_legend=1 00:35:27.153 --rc geninfo_all_blocks=1 00:35:27.153 --rc geninfo_unexecuted_blocks=1 00:35:27.153 00:35:27.153 ' 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:27.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:27.153 --rc genhtml_branch_coverage=1 00:35:27.153 --rc genhtml_function_coverage=1 00:35:27.153 --rc genhtml_legend=1 00:35:27.153 --rc geninfo_all_blocks=1 00:35:27.153 --rc geninfo_unexecuted_blocks=1 00:35:27.153 00:35:27.153 ' 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:27.153 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # 
export PATH 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:27.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- 
# bperfsock=/var/tmp/bperf.sock 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:27.154 16:41:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:29.061 
16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:29.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:29.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:29.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:29.061 
16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:29.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:29.061 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:29.061 16:41:19 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:29.062 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest 
-- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:29.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:29.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.318 ms 00:35:29.320 00:35:29.320 --- 10.0.0.2 ping statistics --- 00:35:29.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.320 rtt min/avg/max/mdev = 0.318/0.318/0.318/0.000 ms 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:29.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:29.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:35:29.320 00:35:29.320 --- 10.0.0.1 ping statistics --- 00:35:29.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:29.320 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:29.320 ************************************ 00:35:29.320 START TEST nvmf_digest_clean 00:35:29.320 ************************************ 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:29.320 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@10 -- # set +x 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=393052 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 393052 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 393052 ']' 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:29.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.321 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.321 [2024-11-19 16:41:19.605896] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:35:29.321 [2024-11-19 16:41:19.605980] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:29.579 [2024-11-19 16:41:19.676824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.579 [2024-11-19 16:41:19.718717] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:29.579 [2024-11-19 16:41:19.718771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:29.579 [2024-11-19 16:41:19.718800] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:29.579 [2024-11-19 16:41:19.718812] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:29.579 [2024-11-19 16:41:19.718821] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:29.579 [2024-11-19 16:41:19.719411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:29.579 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:29.580 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.838 null0 00:35:29.838 [2024-11-19 16:41:19.958528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:29.838 [2024-11-19 16:41:19.982772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=393071 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 393071 /var/tmp/bperf.sock 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 393071 ']' 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:29.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:29.838 16:41:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:29.838 [2024-11-19 16:41:20.031888] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:29.838 [2024-11-19 16:41:20.031988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393071 ] 00:35:29.838 [2024-11-19 16:41:20.120081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:30.097 [2024-11-19 16:41:20.173856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:30.097 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:30.097 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:30.097 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:30.097 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:30.097 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:30.356 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.356 16:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:30.922 nvme0n1 00:35:30.922 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:30.922 16:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:30.922 Running I/O for 2 seconds... 00:35:33.232 17957.00 IOPS, 70.14 MiB/s [2024-11-19T15:41:23.571Z] 18341.00 IOPS, 71.64 MiB/s 00:35:33.232 Latency(us) 00:35:33.232 [2024-11-19T15:41:23.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.232 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:33.232 nvme0n1 : 2.01 18356.88 71.71 0.00 0.00 6966.27 3446.71 21651.15 00:35:33.232 [2024-11-19T15:41:23.571Z] =================================================================================================================== 00:35:33.232 [2024-11-19T15:41:23.571Z] Total : 18356.88 71.71 0.00 0.00 6966.27 3446.71 21651.15 00:35:33.232 { 00:35:33.232 "results": [ 00:35:33.232 { 00:35:33.232 "job": "nvme0n1", 00:35:33.232 "core_mask": "0x2", 00:35:33.232 "workload": "randread", 00:35:33.232 "status": "finished", 00:35:33.232 "queue_depth": 128, 00:35:33.232 "io_size": 4096, 00:35:33.232 "runtime": 2.005243, 00:35:33.232 "iops": 18356.8774457759, 00:35:33.232 "mibps": 71.7065525225621, 00:35:33.232 "io_failed": 0, 00:35:33.232 "io_timeout": 0, 00:35:33.232 "avg_latency_us": 6966.273967903247, 00:35:33.232 "min_latency_us": 3446.708148148148, 00:35:33.232 "max_latency_us": 21651.152592592593 00:35:33.232 } 00:35:33.232 ], 00:35:33.232 "core_count": 1 00:35:33.232 } 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:33.232 | select(.opcode=="crc32c") 00:35:33.232 | "\(.module_name) \(.executed)"' 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 393071 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 393071 ']' 00:35:33.232 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 393071 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393071 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393071' 00:35:33.233 killing process with pid 393071 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 393071 00:35:33.233 Received shutdown signal, test time was about 2.000000 seconds 00:35:33.233 00:35:33.233 Latency(us) 00:35:33.233 [2024-11-19T15:41:23.572Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:33.233 [2024-11-19T15:41:23.572Z] =================================================================================================================== 00:35:33.233 [2024-11-19T15:41:23.572Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:33.233 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 393071 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=393596 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 393596 /var/tmp/bperf.sock 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 393596 ']' 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:33.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.491 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:33.491 [2024-11-19 16:41:23.759337] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:33.491 [2024-11-19 16:41:23.759443] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid393596 ] 00:35:33.491 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:33.491 Zero copy mechanism will not be used. 
00:35:33.750 [2024-11-19 16:41:23.827312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:33.750 [2024-11-19 16:41:23.873284] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.750 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.750 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:33.750 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:33.750 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:33.750 16:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:34.009 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.009 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:34.576 nvme0n1 00:35:34.576 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:34.576 16:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:34.576 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:34.576 Zero copy mechanism will not be used. 00:35:34.576 Running I/O for 2 seconds... 
00:35:36.887 5790.00 IOPS, 723.75 MiB/s [2024-11-19T15:41:27.226Z] 5894.50 IOPS, 736.81 MiB/s 00:35:36.887 Latency(us) 00:35:36.887 [2024-11-19T15:41:27.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.887 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:36.887 nvme0n1 : 2.00 5892.16 736.52 0.00 0.00 2711.30 703.91 8252.68 00:35:36.887 [2024-11-19T15:41:27.226Z] =================================================================================================================== 00:35:36.887 [2024-11-19T15:41:27.226Z] Total : 5892.16 736.52 0.00 0.00 2711.30 703.91 8252.68 00:35:36.887 { 00:35:36.887 "results": [ 00:35:36.887 { 00:35:36.887 "job": "nvme0n1", 00:35:36.887 "core_mask": "0x2", 00:35:36.887 "workload": "randread", 00:35:36.887 "status": "finished", 00:35:36.887 "queue_depth": 16, 00:35:36.887 "io_size": 131072, 00:35:36.887 "runtime": 2.003511, 00:35:36.887 "iops": 5892.156319580976, 00:35:36.887 "mibps": 736.519539947622, 00:35:36.887 "io_failed": 0, 00:35:36.887 "io_timeout": 0, 00:35:36.887 "avg_latency_us": 2711.2962690636423, 00:35:36.887 "min_latency_us": 703.9051851851851, 00:35:36.887 "max_latency_us": 8252.68148148148 00:35:36.887 } 00:35:36.887 ], 00:35:36.887 "core_count": 1 00:35:36.887 } 00:35:36.887 16:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:36.887 16:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:36.887 16:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:36.887 16:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:36.887 | select(.opcode=="crc32c") 00:35:36.887 | "\(.module_name) \(.executed)"' 00:35:36.887 16:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 393596 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 393596 ']' 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 393596 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393596 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393596' 00:35:36.887 killing process with pid 393596 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 393596 00:35:36.887 Received shutdown signal, test time was about 2.000000 seconds 00:35:36.887 
00:35:36.887 Latency(us) 00:35:36.887 [2024-11-19T15:41:27.226Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:36.887 [2024-11-19T15:41:27.226Z] =================================================================================================================== 00:35:36.887 [2024-11-19T15:41:27.226Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:36.887 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 393596 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394004 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394004 /var/tmp/bperf.sock 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394004 ']' 00:35:37.145 16:41:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:37.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:37.145 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:37.145 [2024-11-19 16:41:27.432145] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:37.145 [2024-11-19 16:41:27.432237] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394004 ] 00:35:37.401 [2024-11-19 16:41:27.497275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:37.401 [2024-11-19 16:41:27.541977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:37.401 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:37.401 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:37.401 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:37.401 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:37.401 16:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:37.967 16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:37.967 16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:38.226 nvme0n1 00:35:38.226 16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:38.226 16:41:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:38.484 Running I/O for 2 seconds... 
00:35:40.353 18245.00 IOPS, 71.27 MiB/s [2024-11-19T15:41:30.692Z] 18402.50 IOPS, 71.88 MiB/s 00:35:40.353 Latency(us) 00:35:40.353 [2024-11-19T15:41:30.692Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.353 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:40.353 nvme0n1 : 2.01 18406.54 71.90 0.00 0.00 6937.91 5242.88 17087.91 00:35:40.353 [2024-11-19T15:41:30.692Z] =================================================================================================================== 00:35:40.353 [2024-11-19T15:41:30.692Z] Total : 18406.54 71.90 0.00 0.00 6937.91 5242.88 17087.91 00:35:40.353 { 00:35:40.353 "results": [ 00:35:40.353 { 00:35:40.353 "job": "nvme0n1", 00:35:40.353 "core_mask": "0x2", 00:35:40.353 "workload": "randwrite", 00:35:40.353 "status": "finished", 00:35:40.353 "queue_depth": 128, 00:35:40.353 "io_size": 4096, 00:35:40.353 "runtime": 2.009123, 00:35:40.353 "iops": 18406.53857429336, 00:35:40.353 "mibps": 71.90054130583344, 00:35:40.353 "io_failed": 0, 00:35:40.353 "io_timeout": 0, 00:35:40.353 "avg_latency_us": 6937.907516792908, 00:35:40.353 "min_latency_us": 5242.88, 00:35:40.353 "max_latency_us": 17087.905185185184 00:35:40.353 } 00:35:40.353 ], 00:35:40.353 "core_count": 1 00:35:40.353 } 00:35:40.353 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:40.353 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:40.353 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:40.353 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:40.353 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:40.353 | 
select(.opcode=="crc32c") 00:35:40.353 | "\(.module_name) \(.executed)"' 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394004 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394004 ']' 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394004 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394004 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394004' 00:35:40.612 killing process with pid 394004 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394004 00:35:40.612 Received shutdown signal, test time was about 2.000000 seconds 00:35:40.612 00:35:40.612 Latency(us) 
00:35:40.612 [2024-11-19T15:41:30.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:40.612 [2024-11-19T15:41:30.951Z] =================================================================================================================== 00:35:40.612 [2024-11-19T15:41:30.951Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:40.612 16:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394004 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=394410 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 394410 /var/tmp/bperf.sock 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 394410 ']' 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:40.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:40.871 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:40.871 [2024-11-19 16:41:31.187994] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:40.871 [2024-11-19 16:41:31.188090] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394410 ] 00:35:40.871 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:40.871 Zero copy mechanism will not be used. 
00:35:41.129 [2024-11-19 16:41:31.253544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:41.129 [2024-11-19 16:41:31.297750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:41.129 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:41.129 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:41.129 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:41.129 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:41.129 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:41.707 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:41.707 16:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:41.965 nvme0n1 00:35:41.965 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:41.965 16:41:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:41.965 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:41.965 Zero copy mechanism will not be used. 00:35:41.965 Running I/O for 2 seconds... 
00:35:44.276 5583.00 IOPS, 697.88 MiB/s [2024-11-19T15:41:34.615Z] 5512.00 IOPS, 689.00 MiB/s 00:35:44.276 Latency(us) 00:35:44.276 [2024-11-19T15:41:34.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.276 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:44.276 nvme0n1 : 2.00 5507.03 688.38 0.00 0.00 2897.61 2196.67 8107.05 00:35:44.276 [2024-11-19T15:41:34.615Z] =================================================================================================================== 00:35:44.276 [2024-11-19T15:41:34.615Z] Total : 5507.03 688.38 0.00 0.00 2897.61 2196.67 8107.05 00:35:44.276 { 00:35:44.276 "results": [ 00:35:44.276 { 00:35:44.276 "job": "nvme0n1", 00:35:44.276 "core_mask": "0x2", 00:35:44.276 "workload": "randwrite", 00:35:44.276 "status": "finished", 00:35:44.276 "queue_depth": 16, 00:35:44.276 "io_size": 131072, 00:35:44.276 "runtime": 2.004892, 00:35:44.276 "iops": 5507.029805096733, 00:35:44.276 "mibps": 688.3787256370916, 00:35:44.276 "io_failed": 0, 00:35:44.276 "io_timeout": 0, 00:35:44.276 "avg_latency_us": 2897.6130133140114, 00:35:44.276 "min_latency_us": 2196.6696296296295, 00:35:44.276 "max_latency_us": 8107.045925925926 00:35:44.276 } 00:35:44.276 ], 00:35:44.276 "core_count": 1 00:35:44.276 } 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:44.276 | select(.opcode=="crc32c") 00:35:44.276 | "\(.module_name) \(.executed)"' 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 394410 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 394410 ']' 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 394410 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394410 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394410' 00:35:44.276 killing process with pid 394410 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 394410 00:35:44.276 Received shutdown signal, test time was about 2.000000 seconds 00:35:44.276 00:35:44.276 
Latency(us) 00:35:44.276 [2024-11-19T15:41:34.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:44.276 [2024-11-19T15:41:34.615Z] =================================================================================================================== 00:35:44.276 [2024-11-19T15:41:34.615Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:44.276 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 394410 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 393052 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 393052 ']' 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 393052 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 393052 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 393052' 00:35:44.535 killing process with pid 393052 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 393052 00:35:44.535 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 393052 00:35:44.793 00:35:44.794 real 0m15.422s 00:35:44.794 user 
0m30.717s 00:35:44.794 sys 0m4.406s 00:35:44.794 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:44.794 16:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:44.794 ************************************ 00:35:44.794 END TEST nvmf_digest_clean 00:35:44.794 ************************************ 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:44.794 ************************************ 00:35:44.794 START TEST nvmf_digest_error 00:35:44.794 ************************************ 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=394960 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:44.794 16:41:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 394960 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 394960 ']' 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:44.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:44.794 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:44.794 [2024-11-19 16:41:35.085310] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:44.794 [2024-11-19 16:41:35.085413] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.053 [2024-11-19 16:41:35.160082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.053 [2024-11-19 16:41:35.206825] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.053 [2024-11-19 16:41:35.206895] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:45.053 [2024-11-19 16:41:35.206922] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.053 [2024-11-19 16:41:35.206935] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.053 [2024-11-19 16:41:35.206944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:45.053 [2024-11-19 16:41:35.207509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:45.053 [2024-11-19 16:41:35.344260] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.053 16:41:35 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.053 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:45.312 null0 00:35:45.312 [2024-11-19 16:41:35.459169] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.312 [2024-11-19 16:41:35.483446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=394987 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 394987 /var/tmp/bperf.sock 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 394987 ']' 
00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:45.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.312 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:45.312 [2024-11-19 16:41:35.529583] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:45.312 [2024-11-19 16:41:35.529657] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid394987 ] 00:35:45.312 [2024-11-19 16:41:35.594009] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.312 [2024-11-19 16:41:35.638757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.571 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.571 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:45.571 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:45.571 16:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:45.830 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:46.397 nvme0n1 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:46.397 16:41:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.397 Running I/O for 2 seconds... 00:35:46.397 [2024-11-19 16:41:36.569846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.569906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.569925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.585115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.585145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.585177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.599938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.599968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.599999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.612925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.612954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4542 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.612984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.627476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.627515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.627548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.644814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.644842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.644871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.658757] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.658787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:20349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.397 [2024-11-19 16:41:36.658819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:46.397 [2024-11-19 16:41:36.670364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:46.397 [2024-11-19 16:41:36.670406] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:46.397 [2024-11-19 16:41:36.670421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:46.397 [2024-11-19 16:41:36.685008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0)
00:35:46.397 [2024-11-19 16:41:36.685037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:4462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:46.397 [2024-11-19 16:41:36.685076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-record pattern (nvme_tcp.c:1365 data digest error on tqpair=(0xce73f0), nvme_qpair.c:243 READ command notice, nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion) repeats roughly every 10-15 ms from 16:41:36.701 through 16:41:37.544, with varying cid and lba values ...]
00:35:47.437 17977.00 IOPS, 70.22 MiB/s [2024-11-19T15:41:37.776Z]
00:35:47.437 [2024-11-19 16:41:37.562602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0)
00:35:47.437 [2024-11-19 16:41:37.562631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:47.437 [2024-11-19 16:41:37.562662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... identical data digest error / transient transport error entries continue from 16:41:37.575 through 16:41:37.787 ...]
00:35:47.697 [2024-11-19 16:41:37.802958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0)
00:35:47.697 [2024-11-19 16:41:37.802986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13757 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:47.697 [2024-11-19 16:41:37.803016]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.818677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.818706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7476 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.818737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.832616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.832646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.832690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.846425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.846477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.846495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.859326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.859358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:19027 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.859413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.869971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.870002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:1981 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.870033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.885971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.886009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:4249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.886040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.899715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.899744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.899776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.913669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.913696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:6792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.913727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.927708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.927751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:7435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.927767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.940898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.940942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:10129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.940958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.952697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.952733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.952770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.967333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.967361] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.967391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.981591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.981619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.981649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:37.997211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:37.997240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:8727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:37.997273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:38.013491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:38.013519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:4860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:38.013549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.697 [2024-11-19 16:41:38.028647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xce73f0) 00:35:47.697 [2024-11-19 16:41:38.028676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.697 [2024-11-19 16:41:38.028714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.956 [2024-11-19 16:41:38.044231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.956 [2024-11-19 16:41:38.044275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.956 [2024-11-19 16:41:38.044293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.956 [2024-11-19 16:41:38.060187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.956 [2024-11-19 16:41:38.060218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.956 [2024-11-19 16:41:38.060250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.956 [2024-11-19 16:41:38.070356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.956 [2024-11-19 16:41:38.070384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.956 [2024-11-19 16:41:38.070400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.086034] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.086062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.086101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.100289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.100317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.100348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.117190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.117221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.117238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.130471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.130500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:22563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.130532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.141959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.141986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.142017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.157179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.157207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.157237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.169747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.169774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:14075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.169805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.182732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.182762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.182793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.195343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.195393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.195416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.208994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.209021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:18243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.209052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.220810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.220838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.220869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.236314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.236343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.236359] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.251211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.251240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:1434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.251271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.266644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.266672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.266703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:47.957 [2024-11-19 16:41:38.277770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:47.957 [2024-11-19 16:41:38.277798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:47.957 [2024-11-19 16:41:38.277829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.216 [2024-11-19 16:41:38.293741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.216 [2024-11-19 16:41:38.293770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3807 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:48.216 [2024-11-19 16:41:38.293800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.216 [2024-11-19 16:41:38.308856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.216 [2024-11-19 16:41:38.308900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.216 [2024-11-19 16:41:38.308918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.216 [2024-11-19 16:41:38.323485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.216 [2024-11-19 16:41:38.323529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.216 [2024-11-19 16:41:38.323547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.216 [2024-11-19 16:41:38.336482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.216 [2024-11-19 16:41:38.336526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.216 [2024-11-19 16:41:38.336541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.216 [2024-11-19 16:41:38.347818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.347846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:3 nsid:1 lba:13963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.347876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.362705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.362736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:12921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.362753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.376491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.376521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.376553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.393108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.393140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:9933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.393162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.405148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.405188] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.405220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.421483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.421512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.421544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.436309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.436340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:23647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.436378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.447761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.447810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:11685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.447826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.462586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.462616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:13315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.462648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.477106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.477137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.477154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.488363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.488405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.488421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.503422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.503464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:8787 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.503480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.517341] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.517388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.517406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.530138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.530169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:15952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.530187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.217 [2024-11-19 16:41:38.541172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.217 [2024-11-19 16:41:38.541215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.217 [2024-11-19 16:41:38.541231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:48.476 [2024-11-19 16:41:38.555167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xce73f0) 00:35:48.476 [2024-11-19 16:41:38.555203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:48.476 [2024-11-19 16:41:38.555221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0
00:35:48.476 18251.50 IOPS, 71.29 MiB/s
00:35:48.476 Latency(us)
00:35:48.476 [2024-11-19T15:41:38.815Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:48.476 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:35:48.476 nvme0n1 : 2.01 18259.32 71.33 0.00 0.00 7002.23 3568.07 21651.15
00:35:48.476 [2024-11-19T15:41:38.815Z] ===================================================================================================================
00:35:48.476 [2024-11-19T15:41:38.815Z] Total : 18259.32 71.33 0.00 0.00 7002.23 3568.07 21651.15
00:35:48.476 {
00:35:48.476 "results": [
00:35:48.476 {
00:35:48.476 "job": "nvme0n1",
00:35:48.476 "core_mask": "0x2",
00:35:48.476 "workload": "randread",
00:35:48.476 "status": "finished",
00:35:48.476 "queue_depth": 128,
00:35:48.476 "io_size": 4096,
00:35:48.476 "runtime": 2.006154,
00:35:48.476 "iops": 18259.316084408274,
00:35:48.476 "mibps": 71.32545345471982,
00:35:48.476 "io_failed": 0,
00:35:48.476 "io_timeout": 0,
00:35:48.476 "avg_latency_us": 7002.225178309811,
00:35:48.476 "min_latency_us": 3568.071111111111,
00:35:48.477 "max_latency_us": 21651.152592592593
00:35:48.477 }
00:35:48.477 ],
00:35:48.477 "core_count": 1
00:35:48.477 }
00:35:48.477 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:48.477 | .driver_specific
00:35:48.477 | .nvme_error
00:35:48.477 | .status_code
00:35:48.477 |
.command_transient_transport_error' 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 )) 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 394987 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 394987 ']' 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 394987 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394987 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394987' 00:35:48.736 killing process with pid 394987 00:35:48.736 16:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 394987 00:35:48.736 Received shutdown signal, test time was about 2.000000 seconds 00:35:48.736 00:35:48.736 Latency(us) 00:35:48.736 [2024-11-19T15:41:39.075Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.736 [2024-11-19T15:41:39.075Z] =================================================================================================================== 00:35:48.736 [2024-11-19T15:41:39.075Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.736 16:41:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 394987 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=395397 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 395397 /var/tmp/bperf.sock 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 395397 ']' 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:48.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.736 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:48.995 [2024-11-19 16:41:39.113713] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:48.995 [2024-11-19 16:41:39.113793] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395397 ] 00:35:48.995 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:48.995 Zero copy mechanism will not be used. 00:35:48.995 [2024-11-19 16:41:39.178832] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:48.995 [2024-11-19 16:41:39.221881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.253 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.253 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:49.253 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:49.253 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.511 16:41:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.770 nvme0n1 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:49.770 16:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:50.029 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:50.029 Zero copy mechanism will not be used. 00:35:50.029 Running I/O for 2 seconds... 
00:35:50.029 [2024-11-19 16:41:40.214478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.029 [2024-11-19 16:41:40.214527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.029 [2024-11-19 16:41:40.214547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.029 [2024-11-19 16:41:40.219824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.029 [2024-11-19 16:41:40.219860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.029 [2024-11-19 16:41:40.219878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.225092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.225124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.225142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.229128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.229163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.229181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.232685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.232716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.232736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.237435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.237470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.237488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.242788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.242819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.242836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.247503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.247534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.247551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.252317] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.252348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.252366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.257306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.257336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.257354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.263149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.263179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.263196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.270805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.270838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:50.030 [2024-11-19 16:41:40.270856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.277443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.277479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.277497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.284023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.284054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.284079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.290398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.290430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.290447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.295817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.295848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.295871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.301215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.301247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.301264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.306540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.306572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.306589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.311055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.311093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.311111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.315682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.315713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.315730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.320325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.320356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.320372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.325017] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.325047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.325064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.329687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.329717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.329734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.334351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:50.030 [2024-11-19 16:41:40.334381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.334398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.339166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.339202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.339219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.343826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.343856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.343873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.348468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.348499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.348516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.353139] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.030 [2024-11-19 16:41:40.353170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.030 [2024-11-19 16:41:40.353187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.030 [2024-11-19 16:41:40.357823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.031 [2024-11-19 16:41:40.357853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.031 [2024-11-19 16:41:40.357870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.031 [2024-11-19 16:41:40.362405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.031 [2024-11-19 16:41:40.362436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.031 [2024-11-19 16:41:40.362453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.367169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.367204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.367221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.371913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.371943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.371960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.376716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.376747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.376764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.381611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.381642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.381660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.386341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.386370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.386387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.390942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.390972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.390988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.395640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.395670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.395687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.400268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.400299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.400316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.404891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.404921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.404938] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.410057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.410098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.410116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.413996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.414037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.291 [2024-11-19 16:41:40.414055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.291 [2024-11-19 16:41:40.417795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.291 [2024-11-19 16:41:40.417824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.292 [2024-11-19 16:41:40.417862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.292 [2024-11-19 16:41:40.422641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.292 [2024-11-19 16:41:40.422671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:50.292 [2024-11-19 16:41:40.422688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.427343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.427379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.427397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.433042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.433080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.433099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.438088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.438117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.438135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.442869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.442899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.442916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.447606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.447666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.447684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.453294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.453325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.453343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.459102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.459132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.459165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.466215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.466245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.466277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.471373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.471404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.471422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.476835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.476865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.476897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.482311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.482343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.482375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.487193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.487238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.487255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.492989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.493032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.493048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.497640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.497671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.497688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.502210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.502240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.502257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.507167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.507198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.507221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.512542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.512585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.512606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.517898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.517928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.517945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.522890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.522920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.522937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.527523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.527553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.527569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.532341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.532386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.532403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.538012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.538043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.538059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.543276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.543306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.543323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.549890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.549937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.549955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.292 [2024-11-19 16:41:40.557617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.292 [2024-11-19 16:41:40.557666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.292 [2024-11-19 16:41:40.557683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.563734] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.563781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.563798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.569175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.569207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.569224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.574951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.574983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.575000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.581046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.581097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.581116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.587323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.587355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.587372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.594015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.594047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.594065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.599522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.599554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.599583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.605099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.605130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.605147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.609812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.609842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.609859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.614496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.614551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.619142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.619172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.619190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.293 [2024-11-19 16:41:40.624245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.293 [2024-11-19 16:41:40.624276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.293 [2024-11-19 16:41:40.624293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.628996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.629027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.629044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.634446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.634486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.634504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.639775] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.639805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.639822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.645279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.645313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.645332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.650030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.650067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.650100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.654798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.654829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.654846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.659476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.659506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.659523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.665031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.665076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.665096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.671695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.671726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.671744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.679094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.679127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.679148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.684834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.684865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.684882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.688273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.688319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.688335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.694046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.694097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.694115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.699864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.699899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.699916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.706002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.706031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.706063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.712013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.712042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.712058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.718188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.718219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.718237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.725825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.725855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.725887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.731942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.731972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.732004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.738824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.738869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.738886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.745320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.745366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.745383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.751728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.751772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.751789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.757924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.554 [2024-11-19 16:41:40.757968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.554 [2024-11-19 16:41:40.757984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.554 [2024-11-19 16:41:40.763067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.763123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.763141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.768789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.768820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.768837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.775042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.775095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.775116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.781182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.781213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.781232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.787189] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.787221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.787238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.793304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.793335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.793353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.799390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.799421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.799438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.805597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.805628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.805667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.811638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.811669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.811687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.816984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.817015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.817033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.822538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.822569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.822586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.828240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.828272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.828290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.834188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.834220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.834238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.839955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.839986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.840003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.845644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.845676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.845693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.850795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.850840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.850858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.856633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.856665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.856683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.862586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.862617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.862649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.868753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.868783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.868816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.874269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.874300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.874317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:50.555 [2024-11-19 16:41:40.879176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:50.555 [2024-11-19 16:41:40.879206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.555 [2024-11-19 16:41:40.879223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062
p:0 m:0 dnr:0 00:35:50.555 [2024-11-19 16:41:40.884314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.555 [2024-11-19 16:41:40.884345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.555 [2024-11-19 16:41:40.884362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.889494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.889524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.889541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.894649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.894685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.894703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.899532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.899564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.899589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.905006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.905037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.905056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.911055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.911101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.911120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.916576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.916607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.916624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.921986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.922018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.922036] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.927336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.927367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.927384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.932097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.932137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.932155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.936888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.936919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.936937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.942030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.942060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.942093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.948327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.948364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.948382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.956183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.956214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.956232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.961413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.961455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.961473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.965090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.965120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.965138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.969942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.969988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.970005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.975590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.975636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.975653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.816 [2024-11-19 16:41:40.980534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.816 [2024-11-19 16:41:40.980564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.816 [2024-11-19 16:41:40.980582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:40.985811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:40.985843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:40.985860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:40.990410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:40.990441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:40.990459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:40.995142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:40.995172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:40.995188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:40.999714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:40.999758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:40.999775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.004358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:50.817 [2024-11-19 16:41:41.004388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.004405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.008859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.008890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.008906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.013858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.013889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.013906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.019175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.019206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.019223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.023924] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.023954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.023971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.028512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.028541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.028558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.033132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.033162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.033184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.037710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.037740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.037757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.042466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.042496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.042513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.047690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.047736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.047753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.052340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.052396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.052421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.057708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.057743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.057761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.062709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.062740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.062758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.068559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.068590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.068608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.074182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.074213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.074230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.079403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.079440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.079458] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.084973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.085005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.085022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.090689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.090720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.090738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.096724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.096756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.096774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.102127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.102158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:50.817 [2024-11-19 16:41:41.102176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.107083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.107129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.107146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.111716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.817 [2024-11-19 16:41:41.111762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.817 [2024-11-19 16:41:41.111779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.817 [2024-11-19 16:41:41.116832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.818 [2024-11-19 16:41:41.116864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.818 [2024-11-19 16:41:41.116882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.818 [2024-11-19 16:41:41.123766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.818 [2024-11-19 16:41:41.123797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.818 [2024-11-19 16:41:41.123815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:50.818 [2024-11-19 16:41:41.131263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.818 [2024-11-19 16:41:41.131295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.818 [2024-11-19 16:41:41.131313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:50.818 [2024-11-19 16:41:41.138577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.818 [2024-11-19 16:41:41.138608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.818 [2024-11-19 16:41:41.138625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.818 [2024-11-19 16:41:41.146714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:50.818 [2024-11-19 16:41:41.146746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.818 [2024-11-19 16:41:41.146768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.078 [2024-11-19 16:41:41.154917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.078 [2024-11-19 16:41:41.154948] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.078 [2024-11-19 16:41:41.154966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.078 [2024-11-19 16:41:41.160083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.078 [2024-11-19 16:41:41.160114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.078 [2024-11-19 16:41:41.160132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.078 [2024-11-19 16:41:41.165149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.078 [2024-11-19 16:41:41.165180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.078 [2024-11-19 16:41:41.165198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.078 [2024-11-19 16:41:41.171892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.078 [2024-11-19 16:41:41.171921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.078 [2024-11-19 16:41:41.171937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.078 [2024-11-19 16:41:41.178064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:51.079 [2024-11-19 16:41:41.178101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.178137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.183880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.183930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.183953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.189144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.189175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.189192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.194676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.194707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.194725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.200765] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.200796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.200813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.206799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.206830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.206848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 5686.00 IOPS, 710.75 MiB/s [2024-11-19T15:41:41.418Z] [2024-11-19 16:41:41.213585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.213617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.213634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.218604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.218635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.218652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.223210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.223245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.223263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.227736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.227782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.227800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.232406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.232435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.232452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.236997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.237026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.237059] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.241700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.241730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.241746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.246555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.246585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.246601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.251054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.251095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.251114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.256092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.256121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:51.079 [2024-11-19 16:41:41.256137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.261334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.261365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.261382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.266818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.266847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.266864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.273684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.273714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.273737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.280818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.280850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.280867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.286610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.286642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.286660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.290929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.290960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.290978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.296209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.296240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.296273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.302491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.302536] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.302553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.308126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.308158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.308175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.314117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.314160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.314176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.079 [2024-11-19 16:41:41.320225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.079 [2024-11-19 16:41:41.320256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.079 [2024-11-19 16:41:41.320287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.326107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:51.080 [2024-11-19 16:41:41.326158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.326175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.332652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.332698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.332716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.338963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.339009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.339025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.345138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.345183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.345199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.351021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.351050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.351091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.356968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.356999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.357032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.363548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.363578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.363611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.369607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.369638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.369655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.375770] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.375817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.375834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.381825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.381856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.381890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.388011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.388056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.388080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.394342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.394373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.394390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 
m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.399558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.399589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.399606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.080 [2024-11-19 16:41:41.405366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.080 [2024-11-19 16:41:41.405410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.080 [2024-11-19 16:41:41.405426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.412770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.412802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.412820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.420142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.420173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.420196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.426880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.426925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.426942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.433546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.433591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.433629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.438829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.438861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.438878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.443930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.443961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.443979] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.450005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.450036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.450053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.455965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.456010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.456027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.461978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.462009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.462026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.466598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.466629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:51.340 [2024-11-19 16:41:41.466646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.471238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.471268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.471285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.475965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.475995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.476011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.480596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.480632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.480654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.485320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.340 [2024-11-19 16:41:41.485350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.340 [2024-11-19 16:41:41.485367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.340 [2024-11-19 16:41:41.489910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.489940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.489957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.495312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.495342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.495359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.502085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.502120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.502139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.509116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.509147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.509164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.514755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.514785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.514802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.520292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.520323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.520341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.524906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.341 [2024-11-19 16:41:41.524937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.341 [2024-11-19 16:41:41.524954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.341 [2024-11-19 16:41:41.529714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:51.341 [2024-11-19 16:41:41.529744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.529761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.534465] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.534495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.534512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.539066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.539102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.539118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.544448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.544478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.544495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.551318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.551348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.551365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.558539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.558571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.558589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.564631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.564662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.564679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.571493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.571524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.571541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.577843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.577875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.577898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.581949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.581986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.582004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.587704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.587746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.587762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.595264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.595294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.595312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.602435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.602466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.602482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.610716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.610759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.610774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.617164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.617195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.617213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.621613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.621658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.621675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.626418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.626447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.626463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.631341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.631380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.631397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.636406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.636450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.636466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.341 [2024-11-19 16:41:41.641258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.341 [2024-11-19 16:41:41.641288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.341 [2024-11-19 16:41:41.641306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.646092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.646121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.646153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.650882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.650911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.650943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.655618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.655662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.655678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.660279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.660309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.660326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.665133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.665163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.665180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.342 [2024-11-19 16:41:41.670878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.342 [2024-11-19 16:41:41.670909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.342 [2024-11-19 16:41:41.670926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.678517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.678549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.678567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.684573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.684605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.684623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.690376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.690408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.690440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.696230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.696262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.696280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.701664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.701695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.701713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.706134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.706164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.706181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.710603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.710633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.710651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.715131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.715162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.715178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.719608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.719638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.719662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.724202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.724233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.724251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.728599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.728630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.728647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.732793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.732824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.732841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.737311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.602 [2024-11-19 16:41:41.737341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.602 [2024-11-19 16:41:41.737358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.602 [2024-11-19 16:41:41.741983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.742013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.742030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.746854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.746885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.746903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.751822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.751853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.751870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.756361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.756391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.756408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.760955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.761007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.761024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.764572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.764604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.764621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.768988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.769017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.769033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.775802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.775832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.775864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.781349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.781396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.781413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.786745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.786778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.786814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.791598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.791645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.791662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.796731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.796763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.796779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.802387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.802431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.802447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.807341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.807388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.807405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.812093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.812123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.812140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.816682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.816712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.816744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.821648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.821695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.821712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.824986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.825016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.825049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.828756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.828799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.828816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.833307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.833338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.833354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.837865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.837910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.837926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.842364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.842409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.842431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.846805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.846834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.846865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.851829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.851859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.851889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.857018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.857048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.857091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.861558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.861586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.861615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.603 [2024-11-19 16:41:41.866261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.603 [2024-11-19 16:41:41.866291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.603 [2024-11-19 16:41:41.866308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.871494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.871523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.871555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.876373] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.876403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.876436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.882041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.882095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.882128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.889028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.889059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.889102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.895978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.896008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.896041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.902553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.902597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.902614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.908399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.908430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.908461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.914714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.914760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.914777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.920730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.920781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.920799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.926733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.926768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.926787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.604 [2024-11-19 16:41:41.934006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.604 [2024-11-19 16:41:41.934037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.604 [2024-11-19 16:41:41.934055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.864 [2024-11-19 16:41:41.941324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.864 [2024-11-19 16:41:41.941356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.864 [2024-11-19 16:41:41.941379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:51.864 [2024-11-19 16:41:41.949389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.864 [2024-11-19 16:41:41.949426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.864 [2024-11-19 16:41:41.949443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:51.864 [2024-11-19 16:41:41.957163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.864 [2024-11-19 16:41:41.957195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.864 [2024-11-19 16:41:41.957213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:51.864 [2024-11-19 16:41:41.964701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.864 [2024-11-19 16:41:41.964733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.864 [2024-11-19 16:41:41.964750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:51.864 [2024-11-19 16:41:41.970169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:51.864 [2024-11-19 16:41:41.970201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:51.864 [2024-11-19 16:41:41.970218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.864 [2024-11-19 16:41:41.974841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.864 [2024-11-19 16:41:41.974871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.864 [2024-11-19 16:41:41.974889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.864 [2024-11-19 16:41:41.979375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.864 [2024-11-19 16:41:41.979406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.864 [2024-11-19 16:41:41.979423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.864 [2024-11-19 16:41:41.984434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.864 [2024-11-19 16:41:41.984465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.864 [2024-11-19 16:41:41.984483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.864 [2024-11-19 16:41:41.989442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.864 [2024-11-19 16:41:41.989472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 
lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.864 [2024-11-19 16:41:41.989489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.864 [2024-11-19 16:41:41.994173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:41.994209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:41.994226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:41.998959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:41.998989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:41.999005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.003512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.003540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.003574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.008262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.008292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.008309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.012776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.012806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.012822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.017790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.017821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.017838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.022765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.022796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.022813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.026644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 
00:35:51.865 [2024-11-19 16:41:42.026675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.026692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.033127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.033174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.033192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.040900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.040929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.040962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.046554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.046597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.046614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.052466] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.052496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.052529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.058185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.058230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.058249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.063457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.063486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.063517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.069199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.069231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.069248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.074542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.074570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.074602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.078375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.078420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.078436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.082938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.082983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.083004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.087595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.087623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.087655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.092340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.092370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.092387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.096840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.096868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.096901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.101287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.101316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.101333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.105761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.105792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 
16:41:42.105809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.110268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.110299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.110316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.114727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.114758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.114775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.119201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.119231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.119247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.123647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.865 [2024-11-19 16:41:42.123680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.865 [2024-11-19 16:41:42.123712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.865 [2024-11-19 16:41:42.128275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.128306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.128323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.132692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.132735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.132752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.137320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.137363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.137379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.141829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.141858] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.141890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.146859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.146886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.146917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.151587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.151615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.151647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.158369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.158398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.158431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.165522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 
16:41:42.165552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.165585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.171006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.171036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.171053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.176603] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.176634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.176650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.182054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.182114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.187093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.187124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.187141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.866 [2024-11-19 16:41:42.193321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:51.866 [2024-11-19 16:41:42.193359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.866 [2024-11-19 16:41:42.193376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.124 [2024-11-19 16:41:42.198765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:52.124 [2024-11-19 16:41:42.198797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.124 [2024-11-19 16:41:42.198815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.124 [2024-11-19 16:41:42.205746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920) 00:35:52.124 [2024-11-19 16:41:42.205775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.125 [2024-11-19 16:41:42.205807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.125 5696.00 IOPS, 712.00 MiB/s [2024-11-19T15:41:42.464Z] 
[2024-11-19 16:41:42.214363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x91f920)
00:35:52.125 [2024-11-19 16:41:42.214410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:52.125 [2024-11-19 16:41:42.214427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:52.125
00:35:52.125 Latency(us)
00:35:52.125 [2024-11-19T15:41:42.464Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:52.125 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:35:52.125 nvme0n1 : 2.01 5689.74 711.22 0.00 0.00 2807.18 673.56 12718.84
00:35:52.125 [2024-11-19T15:41:42.464Z] ===================================================================================================================
00:35:52.125 [2024-11-19T15:41:42.464Z] Total : 5689.74 711.22 0.00 0.00 2807.18 673.56 12718.84
00:35:52.125 {
00:35:52.125 "results": [
00:35:52.125 {
00:35:52.125 "job": "nvme0n1",
00:35:52.125 "core_mask": "0x2",
00:35:52.125 "workload": "randread",
00:35:52.125 "status": "finished",
00:35:52.125 "queue_depth": 16,
00:35:52.125 "io_size": 131072,
00:35:52.125 "runtime": 2.005188,
00:35:52.125 "iops": 5689.740812332809,
00:35:52.125 "mibps": 711.2176015416011,
00:35:52.125 "io_failed": 0,
00:35:52.125 "io_timeout": 0,
00:35:52.125 "avg_latency_us": 2807.17644640521,
00:35:52.125 "min_latency_us": 673.5644444444445,
00:35:52.125 "max_latency_us": 12718.838518518518
00:35:52.125 }
00:35:52.125 ],
00:35:52.125 "core_count": 1
00:35:52.125 }
00:35:52.125 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:52.125 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:52.125 | .driver_specific
00:35:52.125 | .nvme_error
00:35:52.125 | .status_code
00:35:52.125 | .command_transient_transport_error'
00:35:52.383 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 369 > 0 ))
00:35:52.383 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 395397
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 395397 ']'
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 395397
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395397
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395397'
killing process with pid 395397
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 395397
00:35:52.384 Received shutdown signal, test time was about 2.000000 seconds
00:35:52.384
00:35:52.384 Latency(us)
[2024-11-19T15:41:42.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:52.384 [2024-11-19T15:41:42.723Z] ===================================================================================================================
00:35:52.384 [2024-11-19T15:41:42.723Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:52.384 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 395397
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=395800
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:35:52.384 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 395800 /var/tmp/bperf.sock
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 395800 ']'
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:52.643 [2024-11-19 16:41:42.749390] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:35:52.643 [2024-11-19 16:41:42.749466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395800 ]
00:35:52.643 [2024-11-19 16:41:42.815015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:35:52.643 [2024-11-19 16:41:42.863558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:35:52.901 16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
16:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:35:53.159 16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:53.159 16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:35:53.417 nvme0n1
00:35:53.417 16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:35:53.417 16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
16:41:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:35:53.676 Running I/O for 2 seconds...
00:35:53.676 [2024-11-19 16:41:43.774523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166dfdc0 00:35:53.676 [2024-11-19 16:41:43.775895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.775950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.785783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e88f8 00:35:53.676 [2024-11-19 16:41:43.786814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:19257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.786844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.797409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5ec8 00:35:53.676 [2024-11-19 16:41:43.798510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.798551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.809357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f4f40 00:35:53.676 [2024-11-19 16:41:43.810145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:22313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.810174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:104 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.821896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e99d8 00:35:53.676 [2024-11-19 16:41:43.822720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:21593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.822762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.836304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fcdd0 00:35:53.676 [2024-11-19 16:41:43.838111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:12058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.838153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.844566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f0bc0 00:35:53.676 [2024-11-19 16:41:43.845564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.845606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.858157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f8e88 00:35:53.676 [2024-11-19 16:41:43.859494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.859522] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.869352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ed920 00:35:53.676 [2024-11-19 16:41:43.870540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.870590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.880927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ed920 00:35:53.676 [2024-11-19 16:41:43.882191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:21568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.882234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.895298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e9e10 00:35:53.676 [2024-11-19 16:41:43.897076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:21794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.897118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.903717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fc128 00:35:53.676 [2024-11-19 16:41:43.904498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.904538] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.916243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166df550 00:35:53.676 [2024-11-19 16:41:43.917204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:15969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.917247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.931461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e4578 00:35:53.676 [2024-11-19 16:41:43.933349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.933391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.939771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e1710 00:35:53.676 [2024-11-19 16:41:43.940822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:21574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.940864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.951504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f35f0 00:35:53.676 [2024-11-19 16:41:43.952134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:15916 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:53.676 [2024-11-19 16:41:43.952177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.965281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f57b0 00:35:53.676 [2024-11-19 16:41:43.966625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.966653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.976990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f7970 00:35:53.676 [2024-11-19 16:41:43.978580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.978633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:43.989140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ddc00 00:35:53.676 [2024-11-19 16:41:43.990653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:18635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.676 [2024-11-19 16:41:43.990695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:35:53.676 [2024-11-19 16:41:44.000500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fda78 00:35:53.676 [2024-11-19 16:41:44.001952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12394 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.677 [2024-11-19 16:41:44.001979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.011551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fe2e8 00:35:53.936 [2024-11-19 16:41:44.012546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.012576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.026677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166feb58 00:35:53.936 [2024-11-19 16:41:44.028671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:17834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.028701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.038578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f46d0 00:35:53.936 [2024-11-19 16:41:44.040466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.040494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.047016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fb8b8 00:35:53.936 [2024-11-19 16:41:44.047947] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:16836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.047989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.059228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ee5c8 00:35:53.936 [2024-11-19 16:41:44.060186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.060228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.070607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f7538 00:35:53.936 [2024-11-19 16:41:44.071490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:12272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.071536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.084576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f46d0 00:35:53.936 [2024-11-19 16:41:44.085650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:25312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.085680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.095823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ee190 00:35:53.936 [2024-11-19 16:41:44.096689] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:12371 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.096723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.107386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f7538 00:35:53.936 [2024-11-19 16:41:44.108508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.108538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.119202] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166f7100 00:35:53.936 [2024-11-19 16:41:44.120329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.120356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.131321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fa7d8 00:35:53.936 [2024-11-19 16:41:44.132500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.132546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.142575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fc998 
00:35:53.936 [2024-11-19 16:41:44.143593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.143635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.154081] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ed0b0 00:35:53.936 [2024-11-19 16:41:44.155066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.155114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.166294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e12d8 00:35:53.936 [2024-11-19 16:41:44.167309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:20785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.936 [2024-11-19 16:41:44.167352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:53.936 [2024-11-19 16:41:44.177587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166ed4e8 00:35:53.937 [2024-11-19 16:41:44.178500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.178547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.191677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c460) with pdu=0x2000166e3060 00:35:53.937 [2024-11-19 16:41:44.193169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.193199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.202681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166de8a8 00:35:53.937 [2024-11-19 16:41:44.203846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:3189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.203875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.214273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e9e10 00:35:53.937 [2024-11-19 16:41:44.215510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21919 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.215553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.228704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166fef90 00:35:53.937 [2024-11-19 16:41:44.230557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.230600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.239544] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:53.937 [2024-11-19 16:41:44.239792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.239836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.253193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:53.937 [2024-11-19 16:41:44.253534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.253577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:53.937 [2024-11-19 16:41:44.267390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:53.937 [2024-11-19 16:41:44.267642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:53.937 [2024-11-19 16:41:44.267671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.281232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.281492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.281521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:35:54.196 [2024-11-19 16:41:44.294900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.295179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:5487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.295208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.308812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.309048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.309087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.322942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.323201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.323230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.337039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.337269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.337301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.351065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.351328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.351356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.365315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.365571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.365598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.378792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.379047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.379083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.392257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.392527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:24662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.392569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.406313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.406682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.406710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.420468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.420755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.420799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.434482] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.434773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:20826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.434801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.448047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.448310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.448340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.461598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.461818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.461850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.475626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.475899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.475944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.196 [2024-11-19 16:41:44.489831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.196 [2024-11-19 16:41:44.490136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:2345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.196 [2024-11-19 16:41:44.490164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.197 [2024-11-19 16:41:44.504054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.197 [2024-11-19 16:41:44.504374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:4046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.197 
[2024-11-19 16:41:44.504403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.197 [2024-11-19 16:41:44.517991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.197 [2024-11-19 16:41:44.518230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.197 [2024-11-19 16:41:44.518262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.456 [2024-11-19 16:41:44.531847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.532084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.532126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.545602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.545931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.545959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.559552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.559939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:17628 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.559968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.573869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.574095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:9774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.574123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.588065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.588381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:8885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.588410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.602332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.602592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.602637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.616584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.616944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:66 nsid:1 lba:7310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.616972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.630875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.631176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.631205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.645018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.645279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.645311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.658720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.659009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.659059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.672923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.673224] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:19455 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.673252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.687206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.687494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:15636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.687536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.701487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.701748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:10788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.701778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.715654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.715940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.715983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.729993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 
[2024-11-19 16:41:44.730372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.730400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.744265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.744594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.744636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.758015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 19586.00 IOPS, 76.51 MiB/s [2024-11-19T15:41:44.796Z] [2024-11-19 16:41:44.758646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:13783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.758702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.772206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.772515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.772557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.457 [2024-11-19 16:41:44.786424] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.457 [2024-11-19 16:41:44.786679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.457 [2024-11-19 16:41:44.786707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.716 [2024-11-19 16:41:44.799698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.716 [2024-11-19 16:41:44.800004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.716 [2024-11-19 16:41:44.800031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.716 [2024-11-19 16:41:44.813602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.716 [2024-11-19 16:41:44.813877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.716 [2024-11-19 16:41:44.813920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.716 [2024-11-19 16:41:44.827773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.716 [2024-11-19 16:41:44.828028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.828056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:54.717 [2024-11-19 16:41:44.842046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.842304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.842332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.856148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.856421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.856465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.870252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.870509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.870536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.884360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.884621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.884666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.898627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.898884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.898919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.912884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.913173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.926964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.927267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.927296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.941155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.941419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.941462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.955268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.955570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.955599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.969389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.969687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.969729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.983575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.983911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:44.983954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:44.997688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:44.998118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:54.717 [2024-11-19 16:41:44.998146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:45.011838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:45.012141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24334 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:45.012173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:45.026005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:45.026329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:45.026357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.717 [2024-11-19 16:41:45.040219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.717 [2024-11-19 16:41:45.040541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:6057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.717 [2024-11-19 16:41:45.040569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.976 [2024-11-19 16:41:45.053745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.054016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:18502 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.054044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.067649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.067964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.067992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.081776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.082108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.082137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.095867] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.096151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.096182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.109989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.110310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.110341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.124326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.124687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.124715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.138450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.138814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.138843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.152791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.153101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5456 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.153130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.166758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 
00:35:54.977 [2024-11-19 16:41:45.167166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.167196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.181066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.181388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21364 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.181431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.195255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.195565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.195607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.209448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.209763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:2113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.209805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.223560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.223872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16073 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.223899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.237858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.238165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16388 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.238193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.251907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.252146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.252178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.265857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.266157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.266193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.279775] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.280054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.280105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.294191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.294505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.294533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:54.977 [2024-11-19 16:41:45.308039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:54.977 [2024-11-19 16:41:45.308278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:54.977 [2024-11-19 16:41:45.308306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.321883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.322210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.322239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:35:55.237 [2024-11-19 16:41:45.336200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.336465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:22913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.336509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.350267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.350596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.350639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.364623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.365003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:11556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.365031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.378819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.379169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8798 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.379197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.392921] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.393267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19842 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.393295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.407185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.407447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.407494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.421493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.421802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.421830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.435639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.435924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:10225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.435967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.449805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.450160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.463991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.464331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20495 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.464360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.478379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.478728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:1886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.478756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.492421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.492758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:35:55.237 [2024-11-19 16:41:45.492801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.506699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.507044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.507092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.520832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.521094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.521123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.534380] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.534715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.534743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.548089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.548351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 
lba:25347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.548379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.237 [2024-11-19 16:41:45.562040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.237 [2024-11-19 16:41:45.562319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.237 [2024-11-19 16:41:45.562349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.576055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.576457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.576484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.590058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.590371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.590413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.604198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.604548] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.604577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.618309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.618670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.618714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.632220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.632592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.632640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.646104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.646409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8454 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.646452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.659916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 
00:35:55.496 [2024-11-19 16:41:45.660240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:21962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.660283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.673725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.674032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:25296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.674081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.687704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.687973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:8332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.688016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.701479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.701788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.701831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.715278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.715613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.715656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.729086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.729388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:9639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.729431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.742918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.743238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:19728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.743270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 [2024-11-19 16:41:45.756771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c460) with pdu=0x2000166e5220 00:35:55.496 [2024-11-19 16:41:45.757120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:55.496 [2024-11-19 16:41:45.757149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:35:55.496 18881.00 IOPS, 73.75 MiB/s 
00:35:55.496 Latency(us) 00:35:55.496 [2024-11-19T15:41:45.835Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.496 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:55.496 nvme0n1 : 2.01 18884.40 73.77 0.00 0.00 6763.25 2669.99 16117.00 00:35:55.496 [2024-11-19T15:41:45.835Z] =================================================================================================================== 00:35:55.496 [2024-11-19T15:41:45.835Z] Total : 18884.40 73.77 0.00 0.00 6763.25 2669.99 16117.00 00:35:55.496 { 00:35:55.496 "results": [ 00:35:55.496 { 00:35:55.496 "job": "nvme0n1", 00:35:55.496 "core_mask": "0x2", 00:35:55.496 "workload": "randwrite", 00:35:55.496 "status": "finished", 00:35:55.496 "queue_depth": 128, 00:35:55.496 "io_size": 4096, 00:35:55.496 "runtime": 2.006418, 00:35:55.496 "iops": 18884.39996052667, 00:35:55.496 "mibps": 73.7671873458073, 00:35:55.496 "io_failed": 0, 00:35:55.496 "io_timeout": 0, 00:35:55.496 "avg_latency_us": 6763.246911312474, 00:35:55.496 "min_latency_us": 2669.9851851851854, 00:35:55.496 "max_latency_us": 16117.001481481482 00:35:55.496 } 00:35:55.496 ], 00:35:55.496 "core_count": 1 00:35:55.496 } 00:35:55.496 16:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:55.496 16:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:55.496 16:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:55.496 | .driver_specific 00:35:55.496 | .nvme_error 00:35:55.496 | .status_code 00:35:55.496 | .command_transient_transport_error' 00:35:55.496 16:41:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:55.754 16:41:46 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 395800 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 395800 ']' 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 395800 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.754 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 395800 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 395800' 00:35:56.011 killing process with pid 395800 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 395800 00:35:56.011 Received shutdown signal, test time was about 2.000000 seconds 00:35:56.011 00:35:56.011 Latency(us) 00:35:56.011 [2024-11-19T15:41:46.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.011 [2024-11-19T15:41:46.350Z] =================================================================================================================== 00:35:56.011 [2024-11-19T15:41:46.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # 
wait 395800 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=396212 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 396212 /var/tmp/bperf.sock 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 396212 ']' 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:56.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:56.011 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:56.011 [2024-11-19 16:41:46.340624] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:35:56.011 [2024-11-19 16:41:46.340698] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid396212 ] 00:35:56.011 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:56.011 Zero copy mechanism will not be used. 00:35:56.269 [2024-11-19 16:41:46.409914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:56.269 [2024-11-19 16:41:46.455267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:56.269 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:56.269 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:56.269 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:56.269 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.527 16:41:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:57.093 nvme0n1 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:57.093 16:41:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:57.352 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:57.352 Zero copy mechanism will not be used. 00:35:57.352 Running I/O for 2 seconds... 
00:35:57.352 [2024-11-19 16:41:47.452822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.352 [2024-11-19 16:41:47.452924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.352 [2024-11-19 16:41:47.452970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.352 [2024-11-19 16:41:47.459241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.352 [2024-11-19 16:41:47.459319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.352 [2024-11-19 16:41:47.459355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.352 [2024-11-19 16:41:47.464906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.352 [2024-11-19 16:41:47.465000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.352 [2024-11-19 16:41:47.465029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.352 [2024-11-19 16:41:47.470808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.352 [2024-11-19 16:41:47.470882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.470916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.476618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.476709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.476741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.482478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.482570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.482612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.488192] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.488264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.488295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.493824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.493902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.493933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.499945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.500017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.500078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.505772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.505846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.505875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.511532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.511606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.511637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.517062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.517176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.517205] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.523462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.523552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.523589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.530230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.530429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.530462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.537601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.537732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.537766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.545155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.545299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 
[2024-11-19 16:41:47.545344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.551987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.552106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.552136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.558696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.558807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.558836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.565991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.566146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.566175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.572635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.572709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.572737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.578194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.578283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.578311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.583303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.583448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.583477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.588198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.588295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.588327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.593272] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.593440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.593473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.599796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.599925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.599955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.605125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.605267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.605296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.610174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.610322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.610351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.615249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.615372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.615409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.620345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.620450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.620480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.353 [2024-11-19 16:41:47.625785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.353 [2024-11-19 16:41:47.625869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.353 [2024-11-19 16:41:47.625897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.630761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.630833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.630861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.636333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 
[2024-11-19 16:41:47.636487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.636516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.642528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.642655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.642685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.648909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.649024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.649061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.655945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.656124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.656162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.662350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.662421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.662449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.667961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.668037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.668065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.673601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.673695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.673727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.678725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.678796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.678824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.354 [2024-11-19 16:41:47.684046] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.354 [2024-11-19 16:41:47.684179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.354 [2024-11-19 16:41:47.684208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.689256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.689331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.689359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.694739] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.694810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.694843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.700282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.700360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.700388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:57.614 [2024-11-19 16:41:47.705373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.705460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.705487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.710238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.710307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.710335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.715281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.715363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.715392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.720338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.720444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.720473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.725537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.725637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.725666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.730708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.730795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.730822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.735824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.735918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.735946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.740838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.740914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.740941] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.746027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.746117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.746145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.751048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.751140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.751168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.756173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.756256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.756283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.762042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.762126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.762154] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.768067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.768149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.614 [2024-11-19 16:41:47.768177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.614 [2024-11-19 16:41:47.774051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.614 [2024-11-19 16:41:47.774134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.774161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.779696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.779769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.779796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.784720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.784795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:57.615 [2024-11-19 16:41:47.784823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.790512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.790584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.790611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.795849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.795928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.795956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.800802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.800879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.800907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.805701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.805776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.805807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.810742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.810820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.810849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.816021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.816096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.816125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.820999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.821107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.821135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.826066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.826156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.826188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.831041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.831141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.831176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.836261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.836340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.836373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.841267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.841369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.841399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.846405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.846483] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.846510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.851965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.852036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.852064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.857311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.857386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.857413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.862520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.862608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.862634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.867684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with 
pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.867776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.867808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.872925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.872996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.873024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.878549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.878625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.878663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.884209] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.884293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.884322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.889901] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.889991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.890019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.894901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.894998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.895027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.899880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.899990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.900019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.905439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.905596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.905624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.911852] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.911972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.615 [2024-11-19 16:41:47.912002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:57.615 [2024-11-19 16:41:47.918124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.615 [2024-11-19 16:41:47.918282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.616 [2024-11-19 16:41:47.918311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:57.616 [2024-11-19 16:41:47.923844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.616 [2024-11-19 16:41:47.923972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.616 [2024-11-19 16:41:47.924002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:57.616 [2024-11-19 16:41:47.930497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:57.616 [2024-11-19 16:41:47.930720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:57.616 [2024-11-19 16:41:47.930749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 
00:35:57.616 [2024-11-19 16:41:47.937119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.616 [2024-11-19 16:41:47.937250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.616 [2024-11-19 16:41:47.937278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.616 [2024-11-19 16:41:47.942500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.616 [2024-11-19 16:41:47.942594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.616 [2024-11-19 16:41:47.942621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.616 [2024-11-19 16:41:47.947944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.616 [2024-11-19 16:41:47.948062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.616 [2024-11-19 16:41:47.948101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.953275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.953361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.953389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.958691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.958760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.958787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.964390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.964471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.964498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.969360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.969444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.969471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.974373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.974457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.974484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.979435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.979556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.979584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.984910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.985017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.985046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.989944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.990097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.990126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.994894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.995000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:47.995029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:47.999828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:47.999976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.000005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.004933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.005078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.005118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.010723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.010806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.010834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.017664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.017771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.017800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.023553] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.023651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.023689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.029135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.029245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.029274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.034463] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.034535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.034562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.040750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.040841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.040874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.046352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.046431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.046458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.051400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.051480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.051507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.056593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.056667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.056694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.061647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.061727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.061754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.066619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.066693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.066720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.071647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.071730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.071757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.076663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.076739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.076766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.081839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.081922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.081949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.086862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.086940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.086967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.091811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.091892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.091919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.096777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.096854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.096881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.101846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.101919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.101947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.106855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.106947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.106976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.111886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.111972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.112000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.116951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.117057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.875 [2024-11-19 16:41:48.117094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.875 [2024-11-19 16:41:48.121630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.875 [2024-11-19 16:41:48.121943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.121976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.127110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.127400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.127430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.132448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.132765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.132795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.137237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.137573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.137601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.141794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.142119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.142153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.146244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.146563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.150605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.150838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.150866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.154812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.155039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.155086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.159341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.159582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.159611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.163617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.163811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.163845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.167963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.168158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.168186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.172504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.172699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.172728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.177096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.177271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.177299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.181773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.181961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.181989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.186354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.186571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.186599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.190845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.191045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.191080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.195470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.195656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.195684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.200674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.200861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.200889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:57.876 [2024-11-19 16:41:48.205787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:57.876 [2024-11-19 16:41:48.206013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:57.876 [2024-11-19 16:41:48.206042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.211797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.212027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.212057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.216952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.217224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.217259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.222155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.222446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.222475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.226938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.227134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.227167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.231902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.232148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.232177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.237167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.237492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.237521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.242306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.242557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.242586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.247552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.247791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.247823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.252947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.253252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.253285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.258122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.258457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.258486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.263417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.263629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.263659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.268593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.268863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.268892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.273784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.274075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.274109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.279030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.279272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.279301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.284221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.284478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.284513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.289448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.289664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.289694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.294549] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.135 [2024-11-19 16:41:48.294751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.135 [2024-11-19 16:41:48.294780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:58.135 [2024-11-19 16:41:48.299837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.136 [2024-11-19 16:41:48.300110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.136 [2024-11-19 16:41:48.300139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.136 [2024-11-19 16:41:48.305118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.136 [2024-11-19 16:41:48.305438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.136 [2024-11-19 16:41:48.305467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.136 [2024-11-19 16:41:48.310201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.136 [2024-11-19 16:41:48.310511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.136 [2024-11-19 16:41:48.310546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.136 [2024-11-19 16:41:48.315224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.136 [2024-11-19 16:41:48.315445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.315474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.320429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.320683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.320712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.325635] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.325886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.325915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.330844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.331080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.331109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.336057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.336263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.336292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.341330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.341620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.341649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.346660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.346971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.347000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.351949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.352191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.352220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.357270] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.357467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.357496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.362378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.362560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.362588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.367702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.367891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.367919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.372871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.373113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.373142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
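The repeated `data_crc32_calc_done: *ERROR*: Data digest error` entries above mean the receiver recomputed the CRC32C over an incoming NVMe/TCP data PDU payload and it did not match the digest carried in the PDU (expected here, since this fuzz/error-injection test corrupts data on the wire, and the host surfaces each mismatch as a TRANSIENT TRANSPORT ERROR completion). A minimal sketch of that check follows; it is illustrative only, not SPDK's implementation, and the function names are assumptions:

```python
# Sketch (not SPDK code) of the data-digest check failing in the log above:
# NVMe/TCP data PDUs carry a trailing 4-byte DDGST field holding the CRC-32C
# of the payload; the receiver recomputes it and flags any mismatch.

def crc32c(data: bytes, crc: int = 0) -> int:
    """Bitwise CRC-32C (Castagnoli), reflected polynomial 0x82F63B78."""
    crc ^= 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

def data_digest_ok(payload: bytes, received_ddgst: int) -> bool:
    # Recompute the digest over the received payload and compare it with the
    # DDGST value that trailed the PDU data. False -> "Data digest error".
    return crc32c(payload) == received_ddgst

# Standard CRC-32C check value for the ASCII string "123456789":
assert crc32c(b"123456789") == 0xE3069283
```

Real implementations use a table-driven or hardware-accelerated CRC32C (e.g. SSE4.2 `crc32`) rather than this bitwise loop; the comparison logic is the same.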
00:35:58.136 [2024-11-19 16:41:48.378067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.378339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.378368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.383278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.383501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.383531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.388428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.388734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.388762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.393626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.393843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.393871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.398743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.399046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.399082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.404002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.404249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.404278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.409304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.409560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.409589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.414538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.414762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.414791] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.419761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.419966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.420000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.425078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.425292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.425321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.430298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.430502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.430538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.435500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.136 [2024-11-19 16:41:48.435699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.136 [2024-11-19 16:41:48.435728] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.136 [2024-11-19 16:41:48.440722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.440923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.440952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.137 [2024-11-19 16:41:48.446030] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.446270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.446300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.137 [2024-11-19 16:41:48.451119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.451349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.451378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.137 5767.00 IOPS, 720.88 MiB/s [2024-11-19T15:41:48.476Z] [2024-11-19 16:41:48.457377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.457565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.457593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.137 [2024-11-19 16:41:48.462451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.462622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.462651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.137 [2024-11-19 16:41:48.467124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.137 [2024-11-19 16:41:48.467260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.137 [2024-11-19 16:41:48.467289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.471901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.472057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.472104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.477361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.477445] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.477472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.482498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.482742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.482771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.487313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.487452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.487480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.491753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.492000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.492029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.496826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 
[2024-11-19 16:41:48.497020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.497049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.502241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.502418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.502447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.396 [2024-11-19 16:41:48.507922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.396 [2024-11-19 16:41:48.508090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.396 [2024-11-19 16:41:48.508119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.512802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.513083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.513118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.517996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.518200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.518229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.523203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.523432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.523462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.528337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.528554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.528589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.533539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.533764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.533793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.538683] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.538848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.538878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.543987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.544222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.544251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.549851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.550060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.550098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.555445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.555609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.555643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:58.397 [2024-11-19 16:41:48.560301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.560437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.560465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.564763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.564915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.564943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.568960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.569135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.569163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.573136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.573294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.573322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.577350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.577503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.577532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.581558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.581727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.581762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.585821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.585976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.586004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.590006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.590183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.590211] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.594233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.594397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.594425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.598413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.598579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.598608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.602611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.602777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.602806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.606857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.607007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.607041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.611127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.611296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.611325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.615361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.615499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.615527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.619618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.619763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.619791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.623781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.623945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:58.397 [2024-11-19 16:41:48.623973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.628007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.397 [2024-11-19 16:41:48.628211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.397 [2024-11-19 16:41:48.628239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.397 [2024-11-19 16:41:48.632246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.632408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.632436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.636467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.636636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.636664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.640716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.640896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13088 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.640924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.644966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.645159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.645188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.649168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.649318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.649346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.653377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.653547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.653577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.657527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.657688] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.657717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.661787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.661923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.661951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.665997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.666175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.666211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.670199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.670358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.670386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.674394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.674556] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.674584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.678559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.678728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.678757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.682799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.682954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.682982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.686967] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.687146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.687174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.691157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 
00:35:58.398 [2024-11-19 16:41:48.691281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.691308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.695337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.695476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.695503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.699509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.699669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.699704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.703719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.703878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.703912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.707890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.708051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.708086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.712150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.712268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.712296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.716338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.716496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.716523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.720576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.720736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.720765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.724773] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.724948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.724983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.398 [2024-11-19 16:41:48.728982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.398 [2024-11-19 16:41:48.729140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.398 [2024-11-19 16:41:48.729169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.733174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.733341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.733369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.737359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.737497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.737526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:58.658 [2024-11-19 16:41:48.741529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.741637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.741665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.745673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.745849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.745883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.749900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.750064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.750103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.754066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.754247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.754275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.758216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.758355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.758383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.762378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.762540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.762568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.766678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.766815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.766843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.771812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.771980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.772009] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.776885] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.777133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.777162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.782362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.782614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.782644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.787597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.787845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.787874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.792701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.792940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.792969] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.658 [2024-11-19 16:41:48.797757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.658 [2024-11-19 16:41:48.798006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.658 [2024-11-19 16:41:48.798035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.802993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.803261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.808029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.808308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.808344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.813012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.813257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:58.659 [2024-11-19 16:41:48.813286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.818109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.818330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.818359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.823290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.823471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.823508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.828405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.828593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.828621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.833585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.833827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.833855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.838658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.838913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.838942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.843883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.844116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.844145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.848945] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.849236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.849266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.853971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.854143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.854172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.859053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.859280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.859309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.864198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.864362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.864391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.869414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.869655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.869684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.874499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 
[2024-11-19 16:41:48.874661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.874689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.879596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.879775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.879815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.884766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.885006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.885035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.889821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.890033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.890061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.894913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.895110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.895138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.900008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.900256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.900285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.905085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.905256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.905286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.910310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:58.659 [2024-11-19 16:41:48.910540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:58.659 [2024-11-19 16:41:48.910568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:58.659 [2024-11-19 16:41:48.915348] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.659 [2024-11-19 16:41:48.915623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.659 [2024-11-19 16:41:48.915652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:58.659 [2024-11-19 16:41:48.920462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.659 [2024-11-19 16:41:48.920699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.659 [2024-11-19 16:41:48.920728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:58.659 [2024-11-19 16:41:48.925525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.659 [2024-11-19 16:41:48.925785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.659 [2024-11-19 16:41:48.925814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:58.659 [2024-11-19 16:41:48.930618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:58.659 [2024-11-19 16:41:48.930855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:58.659 [2024-11-19 16:41:48.930884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... identical data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triads repeat for further LBAs on the same tqpair=(0xd1c7a0), sqhd cycling 0022/0042/0062/0002, from [2024-11-19 16:41:48.935804] through [2024-11-19 16:41:49.289237] ...]
00:35:59.182 [2024-11-19 16:41:49.289055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
00:35:59.182 [2024-11-19 16:41:49.289209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:59.182 [2024-11-19 16:41:49.289237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:59.182 [2024-11-19 16:41:49.294186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8
[2024-11-19 16:41:49.294312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.294341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.299256] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.299487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.299515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.304293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.304439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.304468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.309374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.309538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.309567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.314539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.314724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.314753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.319562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.319704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.319739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.324621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.324761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.324789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.329658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.329844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.329874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.334913] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.335067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.335123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.339939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.340135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.340165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.345079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.345217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.345245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.350147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.350296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.350325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:59.182 [2024-11-19 16:41:49.355287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.355392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.355429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.360403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.360588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.360617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.365526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.365670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.365698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.182 [2024-11-19 16:41:49.370573] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.182 [2024-11-19 16:41:49.370751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.182 [2024-11-19 16:41:49.370780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.375645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.375816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.375845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.380735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.380921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.380950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.385827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.385994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.386022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.391038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.391211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.391240] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.396102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.396268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.396296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.401219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.401358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.401387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.406281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.406448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.406477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.411423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.411638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.411667] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.416572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.416731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.416766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.421653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.421813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.421842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.426748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.426886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.426914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.431906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.432107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:59.183 [2024-11-19 16:41:49.432136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.437043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.437299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.437329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.442231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.442375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.442403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.447423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.447589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:59.183 [2024-11-19 16:41:49.447617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:59.183 [2024-11-19 16:41:49.452513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd1c7a0) with pdu=0x2000166ff3c8 00:35:59.183 [2024-11-19 16:41:49.452674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:59.183 [2024-11-19 16:41:49.452702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:59.183 6048.50 IOPS, 756.06 MiB/s
00:35:59.183 Latency(us)
00:35:59.183 [2024-11-19T15:41:49.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:59.183 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:35:59.183 nvme0n1 : 2.00 6045.55 755.69 0.00 0.00 2639.43 1941.81 7767.23
00:35:59.183 [2024-11-19T15:41:49.522Z] ===================================================================================================================
00:35:59.183 [2024-11-19T15:41:49.522Z] Total : 6045.55 755.69 0.00 0.00 2639.43 1941.81 7767.23
00:35:59.183 {
00:35:59.183   "results": [
00:35:59.183     {
00:35:59.183       "job": "nvme0n1",
00:35:59.183       "core_mask": "0x2",
00:35:59.183       "workload": "randwrite",
00:35:59.183       "status": "finished",
00:35:59.183       "queue_depth": 16,
00:35:59.183       "io_size": 131072,
00:35:59.183       "runtime": 2.003623,
00:35:59.183       "iops": 6045.548488912335,
00:35:59.183       "mibps": 755.6935611140419,
00:35:59.183       "io_failed": 0,
00:35:59.183       "io_timeout": 0,
00:35:59.183       "avg_latency_us": 2639.4315357543624,
00:35:59.183       "min_latency_us": 1941.8074074074075,
00:35:59.183       "max_latency_us": 7767.22962962963
00:35:59.183     }
00:35:59.183   ],
00:35:59.183   "core_count": 1
00:35:59.183 }
00:35:59.183 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:35:59.183 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:35:59.183 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:35:59.183 | .driver_specific
00:35:59.183 | .nvme_error
00:35:59.183 | .status_code
00:35:59.183 | .command_transient_transport_error'
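The get_transient_errcount helper traced above (host/digest.sh@27-28) pipes `rpc.py bdev_get_iostat` output through a jq filter to pull out the transient-transport-error counter. A minimal standalone sketch of the same extraction follows; the iostat JSON is a trimmed, made-up stand-in for real RPC output, and python3 stands in for jq so the sketch has no extra dependency:

```shell
#!/usr/bin/env bash
# Reproduce the host/digest.sh@28 extraction:
#   .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
# The iostat JSON below is a trimmed, hypothetical stand-in for real
# `rpc.py bdev_get_iostat -b nvme0n1` output.
iostat='{"bdevs":[{"name":"nvme0n1","driver_specific":{"nvme_error":{"status_code":{"command_transient_transport_error":391}}}}]}'

errcount=$(printf '%s' "$iostat" | python3 -c '
import json, sys
doc = json.load(sys.stdin)
print(doc["bdevs"][0]["driver_specific"]["nvme_error"]
         ["status_code"]["command_transient_transport_error"])
')

# digest.sh@71 then asserts the counter actually moved: (( errcount > 0 ))
(( errcount > 0 )) && echo "saw $errcount transient transport errors"
```

A non-zero count is what makes the `(( 391 > 0 ))` check in the trace below pass, confirming that the injected data-digest corruption surfaced as transient transport errors.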
00:35:59.183 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 396212 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 396212 ']' 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 396212 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.442 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396212 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396212' 00:35:59.701 killing process with pid 396212 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 396212 00:35:59.701 Received shutdown signal, test time was about 2.000000 seconds 00:35:59.701 00:35:59.701 Latency(us) 00:35:59.701 [2024-11-19T15:41:50.040Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:59.701 [2024-11-19T15:41:50.040Z] 
=================================================================================================================== 00:35:59.701 [2024-11-19T15:41:50.040Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 396212 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 394960 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 394960 ']' 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 394960 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:59.701 16:41:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 394960 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 394960' 00:35:59.961 killing process with pid 394960 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 394960 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 394960 00:35:59.961 00:35:59.961 real 0m15.201s 00:35:59.961 user 0m30.369s 00:35:59.961 sys 0m4.281s 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 
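killprocess, traced above, guards every kill with `kill -0 <pid>`, which tests whether a PID exists without delivering any signal (hence the `kill: (394960) - No such process` line once the daemon is already gone). A self-contained sketch of that liveness idiom, with `is_alive` as an illustrative name rather than the harness's own:

```shell
#!/usr/bin/env bash
# kill -0 delivers no signal; it only checks that the PID exists and
# that we are permitted to signal it.
is_alive() {
    kill -0 "$1" 2>/dev/null
}

sleep 30 &                 # throwaway background process
pid=$!

is_alive "$pid" && state_before=running

kill -9 "$pid"             # hard-kill, as killprocess ultimately does
wait "$pid" 2>/dev/null    # reap it so the PID cannot linger as a zombie
is_alive "$pid" || state_after=gone

echo "$state_before -> $state_after"
```

Probing with `kill -0` before and after the kill is what lets the harness distinguish "process already exited" from "kill failed".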
00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.961 ************************************ 00:35:59.961 END TEST nvmf_digest_error 00:35:59.961 ************************************ 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:59.961 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:59.961 rmmod nvme_tcp 00:35:59.961 rmmod nvme_fabrics 00:36:00.223 rmmod nvme_keyring 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 394960 ']' 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 394960 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 394960 ']' 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 394960 00:36:00.223 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (394960) - No such process 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest 
-- common/autotest_common.sh@981 -- # echo 'Process with pid 394960 is not found' 00:36:00.223 Process with pid 394960 is not found 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:00.223 16:41:50 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:02.138 00:36:02.138 real 0m35.290s 00:36:02.138 user 1m2.098s 00:36:02.138 sys 0m10.347s 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:02.138 ************************************ 00:36:02.138 END TEST nvmf_digest 00:36:02.138 ************************************ 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 
1 ]] 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:02.138 ************************************ 00:36:02.138 START TEST nvmf_bdevperf 00:36:02.138 ************************************ 00:36:02.138 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:02.398 * Looking for test storage... 
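The nvmftestfini teardown earlier in the log rebuilds the firewall via `iptables-save | grep -v SPDK_NVMF | iptables-restore`, dropping only the SPDK-owned rules. The filtering stage can be exercised on a canned rule dump without touching a live firewall; the rules below are invented for illustration:

```shell
#!/usr/bin/env bash
# Simulate the iptr cleanup: strip SPDK_NVMF rules from a saved dump.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -p tcp --dport 4420 -j SPDK_NVMF
-A FORWARD -j DROP'

# grep -v removes every line mentioning SPDK_NVMF, exactly as the
# iptables-save | grep -v SPDK_NVMF | iptables-restore pipeline does.
kept_rules=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)

printf '%s\n' "$kept_rules"
```

In the real pipeline the surviving rules are fed straight back into `iptables-restore`, so everything except the SPDK chains is reinstated unchanged.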
00:36:02.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:02.398 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.399 --rc genhtml_branch_coverage=1 00:36:02.399 --rc genhtml_function_coverage=1 00:36:02.399 --rc genhtml_legend=1 00:36:02.399 --rc geninfo_all_blocks=1 00:36:02.399 --rc geninfo_unexecuted_blocks=1 00:36:02.399 00:36:02.399 ' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:36:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.399 --rc genhtml_branch_coverage=1 00:36:02.399 --rc genhtml_function_coverage=1 00:36:02.399 --rc genhtml_legend=1 00:36:02.399 --rc geninfo_all_blocks=1 00:36:02.399 --rc geninfo_unexecuted_blocks=1 00:36:02.399 00:36:02.399 ' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.399 --rc genhtml_branch_coverage=1 00:36:02.399 --rc genhtml_function_coverage=1 00:36:02.399 --rc genhtml_legend=1 00:36:02.399 --rc geninfo_all_blocks=1 00:36:02.399 --rc geninfo_unexecuted_blocks=1 00:36:02.399 00:36:02.399 ' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:02.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:02.399 --rc genhtml_branch_coverage=1 00:36:02.399 --rc genhtml_function_coverage=1 00:36:02.399 --rc genhtml_legend=1 00:36:02.399 --rc geninfo_all_blocks=1 00:36:02.399 --rc geninfo_unexecuted_blocks=1 00:36:02.399 00:36:02.399 ' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:02.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:36:02.399 16:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:04.940 16:41:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:04.940 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:04.940 
16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:04.940 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:04.940 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:04.940 16:41:54 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:04.940 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:04.940 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:04.941 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:04.941 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:36:04.941 00:36:04.941 --- 10.0.0.2 ping statistics --- 00:36:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.941 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:04.941 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:04.941 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:36:04.941 00:36:04.941 --- 10.0.0.1 ping statistics --- 00:36:04.941 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:04.941 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=398681 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 398681 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 398681 ']' 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:04.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:04.941 16:41:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 [2024-11-19 16:41:54.883877] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:36:04.941 [2024-11-19 16:41:54.883963] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.941 [2024-11-19 16:41:54.954740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:04.941 [2024-11-19 16:41:54.997688] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.941 [2024-11-19 16:41:54.997746] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.941 [2024-11-19 16:41:54.997774] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.941 [2024-11-19 16:41:54.997785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.941 [2024-11-19 16:41:54.997795] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:04.941 [2024-11-19 16:41:54.999285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.941 [2024-11-19 16:41:54.999402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.941 [2024-11-19 16:41:54.999405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 [2024-11-19 16:41:55.140599] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 Malloc0 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:04.941 [2024-11-19 16:41:55.199123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:04.941 
16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:04.941 { 00:36:04.941 "params": { 00:36:04.941 "name": "Nvme$subsystem", 00:36:04.941 "trtype": "$TEST_TRANSPORT", 00:36:04.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:04.941 "adrfam": "ipv4", 00:36:04.941 "trsvcid": "$NVMF_PORT", 00:36:04.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:04.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:04.941 "hdgst": ${hdgst:-false}, 00:36:04.941 "ddgst": ${ddgst:-false} 00:36:04.941 }, 00:36:04.941 "method": "bdev_nvme_attach_controller" 00:36:04.941 } 00:36:04.941 EOF 00:36:04.941 )") 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:04.941 16:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:04.941 "params": { 00:36:04.941 "name": "Nvme1", 00:36:04.941 "trtype": "tcp", 00:36:04.941 "traddr": "10.0.0.2", 00:36:04.941 "adrfam": "ipv4", 00:36:04.941 "trsvcid": "4420", 00:36:04.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:04.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:04.941 "hdgst": false, 00:36:04.941 "ddgst": false 00:36:04.941 }, 00:36:04.941 "method": "bdev_nvme_attach_controller" 00:36:04.941 }' 00:36:04.941 [2024-11-19 16:41:55.249706] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:36:04.942 [2024-11-19 16:41:55.249784] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398711 ] 00:36:05.200 [2024-11-19 16:41:55.317866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:05.200 [2024-11-19 16:41:55.364439] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:05.458 Running I/O for 1 seconds... 00:36:06.397 8490.00 IOPS, 33.16 MiB/s 00:36:06.397 Latency(us) 00:36:06.397 [2024-11-19T15:41:56.736Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:06.397 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:06.397 Verification LBA range: start 0x0 length 0x4000 00:36:06.397 Nvme1n1 : 1.01 8587.33 33.54 0.00 0.00 14828.41 1650.54 15243.19 00:36:06.397 [2024-11-19T15:41:56.736Z] =================================================================================================================== 00:36:06.397 [2024-11-19T15:41:56.736Z] Total : 8587.33 33.54 0.00 0.00 14828.41 1650.54 15243.19 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=398965 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:06.679 { 00:36:06.679 "params": { 00:36:06.679 "name": "Nvme$subsystem", 00:36:06.679 "trtype": "$TEST_TRANSPORT", 00:36:06.679 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:06.679 "adrfam": "ipv4", 00:36:06.679 "trsvcid": "$NVMF_PORT", 00:36:06.679 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:06.679 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:06.679 "hdgst": ${hdgst:-false}, 00:36:06.679 "ddgst": ${ddgst:-false} 00:36:06.679 }, 00:36:06.679 "method": "bdev_nvme_attach_controller" 00:36:06.679 } 00:36:06.679 EOF 00:36:06.679 )") 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:06.679 16:41:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:06.679 "params": { 00:36:06.679 "name": "Nvme1", 00:36:06.679 "trtype": "tcp", 00:36:06.679 "traddr": "10.0.0.2", 00:36:06.679 "adrfam": "ipv4", 00:36:06.679 "trsvcid": "4420", 00:36:06.679 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:06.679 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:06.679 "hdgst": false, 00:36:06.679 "ddgst": false 00:36:06.679 }, 00:36:06.679 "method": "bdev_nvme_attach_controller" 00:36:06.679 }' 00:36:06.679 [2024-11-19 16:41:56.834191] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:36:06.679 [2024-11-19 16:41:56.834266] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid398965 ] 00:36:06.679 [2024-11-19 16:41:56.901457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.679 [2024-11-19 16:41:56.947885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.960 Running I/O for 15 seconds... 00:36:08.878 8375.00 IOPS, 32.71 MiB/s [2024-11-19T15:42:00.155Z] 8504.00 IOPS, 33.22 MiB/s [2024-11-19T15:42:00.155Z] 16:41:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 398681 00:36:09.816 16:41:59 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:09.816 [2024-11-19 16:41:59.800401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:47368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:47376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:100 nsid:1 lba:47384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:47392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:47400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:47408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:47416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:47424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:36:09.816 [2024-11-19 16:41:59.800792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:47432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:47440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:47448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:47456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:47464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:47472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.800986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:47480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.800999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:47040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.816 [2024-11-19 16:41:59.801027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:47488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:47496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:47504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:47512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:47520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:47528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.816 [2024-11-19 16:41:59.801244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:47536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.816 [2024-11-19 16:41:59.801258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:47544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:47552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 
16:41:59.801335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:47560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:47568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:47576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:47584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:47592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:47600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801526] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:47048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:47056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:47064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:47072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:47080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:47088 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:47096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.817 [2024-11-19 16:41:59.801724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:47608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:47616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:47624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:47632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801857] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:47640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:47648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:47656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:47664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:47672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.801970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.801988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:47680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:47688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:47696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:47704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:47712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:47720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:47728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 
[2024-11-19 16:41:59.802202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:47736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:47744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:47752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:47768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802376] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:47776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:47784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:47792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.817 [2024-11-19 16:41:59.802457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.817 [2024-11-19 16:41:59.802474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:47800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:47808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:47816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:47824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:47832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:47840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:47848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:47856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:47864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802711] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:47872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:47880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:47888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:47896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:47904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 
lba:47912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:47920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:47928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:47936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.802979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:47944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.802991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:47952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.803017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 
16:41:59.803030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:47960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.803043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.803097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:47976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.803126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:47984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.818 [2024-11-19 16:41:59.803155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:47104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:47112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803218] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:47120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:47128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:47136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:47144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:47160 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:47168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:47176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:47184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:47192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:47200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803565] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:47216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:47224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.818 [2024-11-19 16:41:59.803632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.818 [2024-11-19 16:41:59.803645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:47992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:48000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:48008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:48016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:48024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:48032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:48040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:48048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:09.819 [2024-11-19 16:41:59.803860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:48056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:09.819 [2024-11-19 16:41:59.803886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:47232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.803912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:47240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.803941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:47248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.803968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.803982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:47256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.803994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:47264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:47272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:47280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:47288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:47296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:47304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:47312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:47320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:47328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:47336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:47344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:09.819 [2024-11-19 16:41:59.804352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.804366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa8cf20 is same with the state(6) to be set 00:36:09.819 [2024-11-19 16:41:59.804398] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:09.819 [2024-11-19 16:41:59.804410] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:09.819 [2024-11-19 16:41:59.804422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:0 nsid:1 lba:47352 len:8 PRP1 0x0 PRP2 0x0 00:36:09.819 [2024-11-19 16:41:59.804435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:09.819 [2024-11-19 16:41:59.807686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.819 [2024-11-19 16:41:59.807764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.819 [2024-11-19 16:41:59.808605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.819 [2024-11-19 16:41:59.808636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.819 [2024-11-19 16:41:59.808652] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.819 [2024-11-19 16:41:59.808891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.819 [2024-11-19 16:41:59.809135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.819 [2024-11-19 16:41:59.809157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.819 [2024-11-19 16:41:59.809173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.819 [2024-11-19 16:41:59.809189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.819 [2024-11-19 16:41:59.821396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.819 [2024-11-19 16:41:59.821756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.819 [2024-11-19 16:41:59.821784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.819 [2024-11-19 16:41:59.821800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.819 [2024-11-19 16:41:59.822027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.819 [2024-11-19 16:41:59.822273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.819 [2024-11-19 16:41:59.822294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.819 [2024-11-19 16:41:59.822307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.819 [2024-11-19 16:41:59.822319] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.819 [2024-11-19 16:41:59.834704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.819 [2024-11-19 16:41:59.835143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.819 [2024-11-19 16:41:59.835172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.819 [2024-11-19 16:41:59.835193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.819 [2024-11-19 16:41:59.835426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.819 [2024-11-19 16:41:59.835662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.819 [2024-11-19 16:41:59.835697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.819 [2024-11-19 16:41:59.835710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.819 [2024-11-19 16:41:59.835721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.819 [2024-11-19 16:41:59.847986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.819 [2024-11-19 16:41:59.848423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.848467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.848483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.848740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.848951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.848970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.848983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.848994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.861169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.861508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.861535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.861551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.861810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.862023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.862042] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.862079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.862093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.874468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.874840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.874882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.874899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.875165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.875390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.875424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.875436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.875448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.887712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.888118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.888161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.888176] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.888434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.888628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.888646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.888658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.888670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.900991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.901396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.901425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.901441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.901683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.901877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.901895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.901907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.901919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.914113] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.914539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.914581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.914598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.914838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.915047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.915066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.915104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.915121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.927306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.927656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.927698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.927714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.927968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.928207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.928227] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.928240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.928252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.940396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.940906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.940933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.940964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.941234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.820 [2024-11-19 16:41:59.941466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.820 [2024-11-19 16:41:59.941485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.820 [2024-11-19 16:41:59.941497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.820 [2024-11-19 16:41:59.941509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.820 [2024-11-19 16:41:59.953617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.820 [2024-11-19 16:41:59.954123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.820 [2024-11-19 16:41:59.954150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.820 [2024-11-19 16:41:59.954166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.820 [2024-11-19 16:41:59.954417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:41:59.954629] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:41:59.954648] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:41:59.954660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:41:59.954671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:41:59.966797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:41:59.967205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:41:59.967234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:41:59.967250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:41:59.967496] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:41:59.967708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:41:59.967727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:41:59.967739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:41:59.967750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:41:59.980036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:41:59.980499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:41:59.980540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:41:59.980556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:41:59.980795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:41:59.980989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:41:59.981007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:41:59.981020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:41:59.981031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:41:59.993449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:41:59.993832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:41:59.993873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:41:59.993889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:41:59.994122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:41:59.994338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:41:59.994357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:41:59.994370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:41:59.994396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.006984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.007361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.007393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.007410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.007647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.007864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.007885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.007914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.007926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.020680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.021031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.021083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.021103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.021321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.021579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.021600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.021614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.021628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.034461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.034937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.035012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.035030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.035255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.035497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.035533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.035546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.035558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.048152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.048513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.048559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.048589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.048902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.049137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.049168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.049183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.049197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.061796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.062174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.062203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.062220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.062450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.062682] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.062703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.062716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.062753] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.075584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.075932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.075960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.075977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.821 [2024-11-19 16:42:00.076216] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.821 [2024-11-19 16:42:00.076465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.821 [2024-11-19 16:42:00.076485] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.821 [2024-11-19 16:42:00.076499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.821 [2024-11-19 16:42:00.076525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.821 [2024-11-19 16:42:00.088944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.821 [2024-11-19 16:42:00.089292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.821 [2024-11-19 16:42:00.089321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.821 [2024-11-19 16:42:00.089338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.822 [2024-11-19 16:42:00.089620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.822 [2024-11-19 16:42:00.089826] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.822 [2024-11-19 16:42:00.089845] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.822 [2024-11-19 16:42:00.089858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.822 [2024-11-19 16:42:00.089874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.822 [2024-11-19 16:42:00.102539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.822 [2024-11-19 16:42:00.102985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.822 [2024-11-19 16:42:00.103013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.822 [2024-11-19 16:42:00.103030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.822 [2024-11-19 16:42:00.103272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.822 [2024-11-19 16:42:00.103504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.822 [2024-11-19 16:42:00.103524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.822 [2024-11-19 16:42:00.103538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.822 [2024-11-19 16:42:00.103550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.822 [2024-11-19 16:42:00.115917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.822 [2024-11-19 16:42:00.116315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.822 [2024-11-19 16:42:00.116343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.822 [2024-11-19 16:42:00.116369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.822 [2024-11-19 16:42:00.116622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.822 [2024-11-19 16:42:00.116842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.822 [2024-11-19 16:42:00.116862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.822 [2024-11-19 16:42:00.116875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.822 [2024-11-19 16:42:00.116887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.822 [2024-11-19 16:42:00.129269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.822 [2024-11-19 16:42:00.129662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.822 [2024-11-19 16:42:00.129705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.822 [2024-11-19 16:42:00.129721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.822 [2024-11-19 16:42:00.129992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.822 [2024-11-19 16:42:00.130244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.822 [2024-11-19 16:42:00.130267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.822 [2024-11-19 16:42:00.130281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.822 [2024-11-19 16:42:00.130294] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.822 [2024-11-19 16:42:00.142806] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.822 [2024-11-19 16:42:00.143208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.822 [2024-11-19 16:42:00.143236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:09.822 [2024-11-19 16:42:00.143252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:09.822 [2024-11-19 16:42:00.143481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:09.822 [2024-11-19 16:42:00.143715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.822 [2024-11-19 16:42:00.143735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.822 [2024-11-19 16:42:00.143747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.822 [2024-11-19 16:42:00.143759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.156187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.156581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.156609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.156625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.156868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.157092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.157112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.157126] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.157138] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.169649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.169998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.170027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.170043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.170267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.170521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.170541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.170554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.170565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 7421.33 IOPS, 28.99 MiB/s [2024-11-19T15:42:00.422Z] [2024-11-19 16:42:00.184632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.185080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.185124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.185140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.185387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.185618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.185637] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.185650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.185662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.197919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.198307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.198336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.198353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.198607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.198800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.198819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.198832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.198844] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.211121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.211601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.211651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.211668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.211913] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.212132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.212152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.212165] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.212178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.224509] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.224953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.225003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.225019] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.225281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.225499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.225523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.225537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.225548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.237837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.238229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.238258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.238274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.238517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.238726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.238745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.238757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.238768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.251027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.251460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.251513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.251530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.083 [2024-11-19 16:42:00.251771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.083 [2024-11-19 16:42:00.251980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.083 [2024-11-19 16:42:00.252000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.083 [2024-11-19 16:42:00.252013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.083 [2024-11-19 16:42:00.252024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.083 [2024-11-19 16:42:00.264306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.083 [2024-11-19 16:42:00.264691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.083 [2024-11-19 16:42:00.264734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.083 [2024-11-19 16:42:00.264751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.084 [2024-11-19 16:42:00.265017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.084 [2024-11-19 16:42:00.265241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.084 [2024-11-19 16:42:00.265261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.084 [2024-11-19 16:42:00.265274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.084 [2024-11-19 16:42:00.265291] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.084 [2024-11-19 16:42:00.277474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.277963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.278004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.278021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.278265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.278465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.278484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.278497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.278508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.290884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.291318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.291347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.291363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.291605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.291805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.291824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.291836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.291848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.304173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.304505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.304531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.304546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.304747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.304973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.304992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.305005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.305016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.317502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.317956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.317985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.318001] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.318240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.318487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.318508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.318535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.318550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.330773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.331217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.331245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.331262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.331502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.331712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.331731] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.331743] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.331755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.344157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.344657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.344698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.344714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.344946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.345200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.345222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.345236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.345248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.357415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.357748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.357776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.357792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.358019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.358270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.358292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.358306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.358318] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.370665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.371099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.371126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.084 [2024-11-19 16:42:00.371142] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.084 [2024-11-19 16:42:00.371372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.084 [2024-11-19 16:42:00.371606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.084 [2024-11-19 16:42:00.371625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.084 [2024-11-19 16:42:00.371638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.084 [2024-11-19 16:42:00.371649] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.084 [2024-11-19 16:42:00.384082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.084 [2024-11-19 16:42:00.384434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.084 [2024-11-19 16:42:00.384463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.085 [2024-11-19 16:42:00.384480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.085 [2024-11-19 16:42:00.384710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.085 [2024-11-19 16:42:00.384931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.085 [2024-11-19 16:42:00.384951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.085 [2024-11-19 16:42:00.384964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.085 [2024-11-19 16:42:00.384977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.085 [2024-11-19 16:42:00.397435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.085 [2024-11-19 16:42:00.397804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.085 [2024-11-19 16:42:00.397832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.085 [2024-11-19 16:42:00.397848] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.085 [2024-11-19 16:42:00.398088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.085 [2024-11-19 16:42:00.398320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.085 [2024-11-19 16:42:00.398344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.085 [2024-11-19 16:42:00.398358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.085 [2024-11-19 16:42:00.398370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.085 [2024-11-19 16:42:00.410794] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.085 [2024-11-19 16:42:00.411174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.085 [2024-11-19 16:42:00.411202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.085 [2024-11-19 16:42:00.411219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.085 [2024-11-19 16:42:00.411463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.085 [2024-11-19 16:42:00.411669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.085 [2024-11-19 16:42:00.411689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.085 [2024-11-19 16:42:00.411702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.085 [2024-11-19 16:42:00.411714] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.345 [2024-11-19 16:42:00.424395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.345 [2024-11-19 16:42:00.424827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.345 [2024-11-19 16:42:00.424855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.345 [2024-11-19 16:42:00.424871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.345 [2024-11-19 16:42:00.425112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.345 [2024-11-19 16:42:00.425325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.345 [2024-11-19 16:42:00.425346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.345 [2024-11-19 16:42:00.425375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.345 [2024-11-19 16:42:00.425388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.345 [2024-11-19 16:42:00.437746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.345 [2024-11-19 16:42:00.438091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.345 [2024-11-19 16:42:00.438119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.345 [2024-11-19 16:42:00.438136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.345 [2024-11-19 16:42:00.438365] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.345 [2024-11-19 16:42:00.438586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.345 [2024-11-19 16:42:00.438606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.345 [2024-11-19 16:42:00.438619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.345 [2024-11-19 16:42:00.438654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.345 [2024-11-19 16:42:00.451318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.345 [2024-11-19 16:42:00.451656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.345 [2024-11-19 16:42:00.451696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.345 [2024-11-19 16:42:00.451712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.345 [2024-11-19 16:42:00.451920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.345 [2024-11-19 16:42:00.452179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.345 [2024-11-19 16:42:00.452201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.345 [2024-11-19 16:42:00.452215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.345 [2024-11-19 16:42:00.452228] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.345 [2024-11-19 16:42:00.464765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.345 [2024-11-19 16:42:00.465181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.465209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.465226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.465456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.465678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.465698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.465711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.465723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.478143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.478575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.478618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.478635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.478879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.479106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.479142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.479157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.479171] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.491534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.491952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.491998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.492015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.492265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.492492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.492512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.492525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.492553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.504817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.505208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.505236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.505253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.505484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.505699] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.505718] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.505731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.505742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.518401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.518729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.518756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.518772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.518986] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.519237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.519259] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.519273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.519286] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.531685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.532117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.532145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.532162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.532410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.532623] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.532643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.532656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.532668] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.544943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.545344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.545387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.545402] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.545650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.545865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.545893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.545906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.545918] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.558173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.558568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.558597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.558613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.558858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.559057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.559100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.559114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.559126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.346 [2024-11-19 16:42:00.571472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.346 [2024-11-19 16:42:00.571844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.346 [2024-11-19 16:42:00.571872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.346 [2024-11-19 16:42:00.571888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.346 [2024-11-19 16:42:00.572114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.346 [2024-11-19 16:42:00.572334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.346 [2024-11-19 16:42:00.572360] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.346 [2024-11-19 16:42:00.572375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.346 [2024-11-19 16:42:00.572388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.584817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.347 [2024-11-19 16:42:00.585188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.347 [2024-11-19 16:42:00.585217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.347 [2024-11-19 16:42:00.585233] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.347 [2024-11-19 16:42:00.585464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.347 [2024-11-19 16:42:00.585679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.347 [2024-11-19 16:42:00.585699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.347 [2024-11-19 16:42:00.585711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.347 [2024-11-19 16:42:00.585723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.598145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.347 [2024-11-19 16:42:00.598599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.347 [2024-11-19 16:42:00.598641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.347 [2024-11-19 16:42:00.598658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.347 [2024-11-19 16:42:00.598899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.347 [2024-11-19 16:42:00.599138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.347 [2024-11-19 16:42:00.599158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.347 [2024-11-19 16:42:00.599172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.347 [2024-11-19 16:42:00.599184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.611648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.347 [2024-11-19 16:42:00.612044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.347 [2024-11-19 16:42:00.612080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.347 [2024-11-19 16:42:00.612098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.347 [2024-11-19 16:42:00.612328] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.347 [2024-11-19 16:42:00.612576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.347 [2024-11-19 16:42:00.612596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.347 [2024-11-19 16:42:00.612609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.347 [2024-11-19 16:42:00.612622] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.625134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.347 [2024-11-19 16:42:00.625505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.347 [2024-11-19 16:42:00.625533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.347 [2024-11-19 16:42:00.625549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.347 [2024-11-19 16:42:00.625779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.347 [2024-11-19 16:42:00.626015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.347 [2024-11-19 16:42:00.626035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.347 [2024-11-19 16:42:00.626062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.347 [2024-11-19 16:42:00.626086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.638588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:10.347 [2024-11-19 16:42:00.638959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:10.347 [2024-11-19 16:42:00.638987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:10.347 [2024-11-19 16:42:00.639004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:10.347 [2024-11-19 16:42:00.639231] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:10.347 [2024-11-19 16:42:00.639479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:10.347 [2024-11-19 16:42:00.639499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:10.347 [2024-11-19 16:42:00.639513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:10.347 [2024-11-19 16:42:00.639525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:10.347 [2024-11-19 16:42:00.651963] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.347 [2024-11-19 16:42:00.652336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.347 [2024-11-19 16:42:00.652365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.347 [2024-11-19 16:42:00.652382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.347 [2024-11-19 16:42:00.652613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.347 [2024-11-19 16:42:00.652828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.347 [2024-11-19 16:42:00.652848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.347 [2024-11-19 16:42:00.652861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.347 [2024-11-19 16:42:00.652872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.347 [2024-11-19 16:42:00.665383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.347 [2024-11-19 16:42:00.665765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.347 [2024-11-19 16:42:00.665811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.347 [2024-11-19 16:42:00.665829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.347 [2024-11-19 16:42:00.666078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.347 [2024-11-19 16:42:00.666306] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.347 [2024-11-19 16:42:00.666326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.347 [2024-11-19 16:42:00.666340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.347 [2024-11-19 16:42:00.666367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.347 [2024-11-19 16:42:00.678967] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.679536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.679578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.679594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.679830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.680061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.680104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.680119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.680133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.692586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.693016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.693045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.693061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.693286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.693521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.693539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.693552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.693564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.705917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.706275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.706304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.706320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.706550] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.706779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.706799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.706811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.706823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.719287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.719681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.719709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.719725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.719967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.720216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.720238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.720251] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.720264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.732658] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.733028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.733080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.733098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.733342] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.733576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.733596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.733609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.733635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.745962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.746319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.746362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.746379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.746636] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.746851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.746871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.746889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.746916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.759311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.759662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.759703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.759720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.759962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.760199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.760219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.760232] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.760243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.772414] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.610 [2024-11-19 16:42:00.772854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.610 [2024-11-19 16:42:00.772883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.610 [2024-11-19 16:42:00.772899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.610 [2024-11-19 16:42:00.773137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.610 [2024-11-19 16:42:00.773351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.610 [2024-11-19 16:42:00.773386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.610 [2024-11-19 16:42:00.773398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.610 [2024-11-19 16:42:00.773410] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.610 [2024-11-19 16:42:00.785722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.786032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.786080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.786098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.786356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.786574] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.786593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.786606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.786618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.799016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.799458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.799486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.799502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.799733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.799949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.799968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.799980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.799992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.812410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.812784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.812811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.812827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.813079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.813300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.813320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.813333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.813360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.825672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.826098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.826133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.826149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.826364] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.826638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.826659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.826673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.826686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.839039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.839424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.839452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.839473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.839698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.839909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.839928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.839941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.839952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.852252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.852636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.852680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.852696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.852965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.853207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.853228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.853242] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.853255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.865424] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.865766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.865795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.865811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.866040] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.866286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.611 [2024-11-19 16:42:00.866306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.611 [2024-11-19 16:42:00.866319] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.611 [2024-11-19 16:42:00.866332] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.611 [2024-11-19 16:42:00.878687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.611 [2024-11-19 16:42:00.879126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.611 [2024-11-19 16:42:00.879155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.611 [2024-11-19 16:42:00.879172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.611 [2024-11-19 16:42:00.879414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.611 [2024-11-19 16:42:00.879619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.612 [2024-11-19 16:42:00.879638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.612 [2024-11-19 16:42:00.879651] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.612 [2024-11-19 16:42:00.879662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.612 [2024-11-19 16:42:00.891801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.612 [2024-11-19 16:42:00.892292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.612 [2024-11-19 16:42:00.892334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.612 [2024-11-19 16:42:00.892351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.612 [2024-11-19 16:42:00.892600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.612 [2024-11-19 16:42:00.892794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.612 [2024-11-19 16:42:00.892813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.612 [2024-11-19 16:42:00.892825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.612 [2024-11-19 16:42:00.892836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.612 [2024-11-19 16:42:00.905051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.612 [2024-11-19 16:42:00.905448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.612 [2024-11-19 16:42:00.905491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.612 [2024-11-19 16:42:00.905507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.612 [2024-11-19 16:42:00.905743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.612 [2024-11-19 16:42:00.905953] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.612 [2024-11-19 16:42:00.905972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.612 [2024-11-19 16:42:00.905985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.612 [2024-11-19 16:42:00.905996] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.612 [2024-11-19 16:42:00.918341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.612 [2024-11-19 16:42:00.918716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.612 [2024-11-19 16:42:00.918743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.612 [2024-11-19 16:42:00.918759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.612 [2024-11-19 16:42:00.918974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.612 [2024-11-19 16:42:00.919230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.612 [2024-11-19 16:42:00.919252] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.612 [2024-11-19 16:42:00.919270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.612 [2024-11-19 16:42:00.919283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.612 [2024-11-19 16:42:00.931574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.612 [2024-11-19 16:42:00.931949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.612 [2024-11-19 16:42:00.931977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.612 [2024-11-19 16:42:00.931993] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.612 [2024-11-19 16:42:00.932246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.612 [2024-11-19 16:42:00.932464] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.612 [2024-11-19 16:42:00.932484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.612 [2024-11-19 16:42:00.932496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.612 [2024-11-19 16:42:00.932508] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.872 [2024-11-19 16:42:00.944988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.872 [2024-11-19 16:42:00.945443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.872 [2024-11-19 16:42:00.945474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.872 [2024-11-19 16:42:00.945506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.872 [2024-11-19 16:42:00.945750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.872 [2024-11-19 16:42:00.945950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.872 [2024-11-19 16:42:00.945970] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.872 [2024-11-19 16:42:00.945982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.872 [2024-11-19 16:42:00.945994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.872 [2024-11-19 16:42:00.958279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.872 [2024-11-19 16:42:00.958624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.872 [2024-11-19 16:42:00.958652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.872 [2024-11-19 16:42:00.958668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.872 [2024-11-19 16:42:00.958891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.872 [2024-11-19 16:42:00.959152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.872 [2024-11-19 16:42:00.959173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.872 [2024-11-19 16:42:00.959187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.872 [2024-11-19 16:42:00.959199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:00.971531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:00.971993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:00.972021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:00.972037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:00.972274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:00.972509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:00.972529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:00.972542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:00.972553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:00.984732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:00.985171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:00.985200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:00.985216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:00.985457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:00.985666] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:00.985685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:00.985698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:00.985709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:00.997983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:00.998418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:00.998445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:00.998461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:00.998684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:00.998894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:00.998913] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:00.998925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:00.998937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.011204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.011586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.011613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.011634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.011871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.012107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.012129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:01.012142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:01.012154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.024343] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.024748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.024776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.024792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.025034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.025263] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.025284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:01.025297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:01.025309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.037508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.037880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.037923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.037939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.038210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.038457] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.038476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:01.038490] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:01.038501] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.050824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.051208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.051236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.051252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.051473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.051687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.051706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:01.051718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:01.051730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.064023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.064420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.064463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.064480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.064733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.064942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.064961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.873 [2024-11-19 16:42:01.064973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.873 [2024-11-19 16:42:01.064984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.873 [2024-11-19 16:42:01.077171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.873 [2024-11-19 16:42:01.077592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.873 [2024-11-19 16:42:01.077620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.873 [2024-11-19 16:42:01.077637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.873 [2024-11-19 16:42:01.077868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.873 [2024-11-19 16:42:01.078125] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.873 [2024-11-19 16:42:01.078147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.078161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.078174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.090506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.090860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.090888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.090904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.091156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.091384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.091405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.091438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.091451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.103876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.104311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.104338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.104368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.104597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.104813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.104832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.104846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.104857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.117475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.117879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.117907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.117923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.118148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.118383] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.118404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.118432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.118445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.131165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.131658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.131699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.131715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.131960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.132204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.132225] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.132239] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.132251] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.144322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.144775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.144822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.144838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.145081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.145282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.145301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.145314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.145325] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.157505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.157890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.157938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.157953] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.158206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.158448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.158468] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.158481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.158493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.170848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.171240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.171269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.171285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.874 [2024-11-19 16:42:01.171527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.874 [2024-11-19 16:42:01.171744] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.874 [2024-11-19 16:42:01.171763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.874 [2024-11-19 16:42:01.171776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.874 [2024-11-19 16:42:01.171787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.874 [2024-11-19 16:42:01.184125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.874 [2024-11-19 16:42:01.184568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.874 [2024-11-19 16:42:01.184597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.874 [2024-11-19 16:42:01.184620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.875 [2024-11-19 16:42:01.184864] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.875 [2024-11-19 16:42:01.185105] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.875 [2024-11-19 16:42:01.185126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.875 [2024-11-19 16:42:01.185139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.875 [2024-11-19 16:42:01.185152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.875 5566.00 IOPS, 21.74 MiB/s [2024-11-19T15:42:01.214Z] [2024-11-19 16:42:01.197242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.875 [2024-11-19 16:42:01.197699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.875 [2024-11-19 16:42:01.197750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:10.875 [2024-11-19 16:42:01.197786] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:10.875 [2024-11-19 16:42:01.198041] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:10.875 [2024-11-19 16:42:01.198287] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.875 [2024-11-19 16:42:01.198308] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.875 [2024-11-19 16:42:01.198322] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.875 [2024-11-19 16:42:01.198334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
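The performance sample interleaved above (5566.00 IOPS, 21.74 MiB/s) is consistent with a 4 KiB I/O size, since 5566 × 4096 bytes ≈ 21.74 MiB per second. A quick sketch of that conversion — note the 4096-byte block size is inferred from the numbers here, not stated anywhere in the log:

```python
def iops_to_mib_s(iops: float, io_size_bytes: int = 4096) -> float:
    """Convert an IOPS figure to throughput in MiB/s for a given I/O size.

    io_size_bytes defaults to 4 KiB, which matches the ratio reported in
    the log sample (5566.00 IOPS ~ 21.74 MiB/s); the actual test's block
    size is an assumption, not something the log states explicitly.
    """
    return iops * io_size_bytes / (1 << 20)  # 1 MiB = 2**20 bytes

if __name__ == "__main__":
    print(round(iops_to_mib_s(5566.00), 2))
```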
00:36:11.135 [2024-11-19 16:42:01.210987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.135 [2024-11-19 16:42:01.211366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.135 [2024-11-19 16:42:01.211403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.135 [2024-11-19 16:42:01.211437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.135 [2024-11-19 16:42:01.211695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.135 [2024-11-19 16:42:01.211931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.135 [2024-11-19 16:42:01.211952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.135 [2024-11-19 16:42:01.211965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.135 [2024-11-19 16:42:01.211992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.135 [2024-11-19 16:42:01.224681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.135 [2024-11-19 16:42:01.225118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.135 [2024-11-19 16:42:01.225156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.135 [2024-11-19 16:42:01.225191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.135 [2024-11-19 16:42:01.225419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.135 [2024-11-19 16:42:01.225641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.135 [2024-11-19 16:42:01.225661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.135 [2024-11-19 16:42:01.225690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.135 [2024-11-19 16:42:01.225702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.135 [2024-11-19 16:42:01.238081] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.135 [2024-11-19 16:42:01.238434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.135 [2024-11-19 16:42:01.238461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.135 [2024-11-19 16:42:01.238477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.135 [2024-11-19 16:42:01.238699] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.135 [2024-11-19 16:42:01.238914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.135 [2024-11-19 16:42:01.238933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.135 [2024-11-19 16:42:01.238946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.135 [2024-11-19 16:42:01.238958] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.135 [2024-11-19 16:42:01.251475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.135 [2024-11-19 16:42:01.251849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.135 [2024-11-19 16:42:01.251896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.135 [2024-11-19 16:42:01.251918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.135 [2024-11-19 16:42:01.252191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.135 [2024-11-19 16:42:01.252426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.135 [2024-11-19 16:42:01.252445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.135 [2024-11-19 16:42:01.252458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.135 [2024-11-19 16:42:01.252470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.135 [2024-11-19 16:42:01.264798] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.135 [2024-11-19 16:42:01.265211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.135 [2024-11-19 16:42:01.265239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.135 [2024-11-19 16:42:01.265255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.135 [2024-11-19 16:42:01.265484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.135 [2024-11-19 16:42:01.265700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.135 [2024-11-19 16:42:01.265719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.135 [2024-11-19 16:42:01.265737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.135 [2024-11-19 16:42:01.265749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.135 [2024-11-19 16:42:01.277989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.135 [2024-11-19 16:42:01.278409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.135 [2024-11-19 16:42:01.278458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.135 [2024-11-19 16:42:01.278474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.135 [2024-11-19 16:42:01.278716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.135 [2024-11-19 16:42:01.278926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.135 [2024-11-19 16:42:01.278945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.135 [2024-11-19 16:42:01.278958] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.135 [2024-11-19 16:42:01.278969] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.135 [2024-11-19 16:42:01.291184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.135 [2024-11-19 16:42:01.291632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.135 [2024-11-19 16:42:01.291673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.135 [2024-11-19 16:42:01.291690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.135 [2024-11-19 16:42:01.291931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.135 [2024-11-19 16:42:01.292171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.292192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.292206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.292218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.304458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.304832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.304880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.304896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.305158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.305358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.305391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.305404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.305416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.317697] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.318046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.318081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.318099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.318315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.318550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.318570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.318582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.318594] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.331002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.331376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.331406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.331423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.331653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.331907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.331928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.331941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.331954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.344346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.344715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.344743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.344760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.345004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.345253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.345274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.345288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.345300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.357779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.358159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.358205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.358227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.358491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.358692] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.358712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.358724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.358736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.371260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.371641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.371669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.371686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.371929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.372156] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.372177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.372190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.372202] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.384635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.384979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.385008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.385024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.385249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.385502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.385522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.385534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.385546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.398024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.398420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.398448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.398464] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.398704] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.398902] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.398921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.398933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.398945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.411445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.411808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.411835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.411851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.412095] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.412316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.412337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.412351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.412378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.424540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.425034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.425083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.425101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.425343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.425569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.425589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.425601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.136 [2024-11-19 16:42:01.425613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.136 [2024-11-19 16:42:01.437832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.136 [2024-11-19 16:42:01.438250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.136 [2024-11-19 16:42:01.438278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.136 [2024-11-19 16:42:01.438295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.136 [2024-11-19 16:42:01.438527] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.136 [2024-11-19 16:42:01.438742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.136 [2024-11-19 16:42:01.438762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.136 [2024-11-19 16:42:01.438774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.137 [2024-11-19 16:42:01.438790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.137 [2024-11-19 16:42:01.450971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.137 [2024-11-19 16:42:01.451377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.137 [2024-11-19 16:42:01.451447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.137 [2024-11-19 16:42:01.451463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.137 [2024-11-19 16:42:01.451714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.137 [2024-11-19 16:42:01.451908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.137 [2024-11-19 16:42:01.451926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.137 [2024-11-19 16:42:01.451939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.137 [2024-11-19 16:42:01.451950] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.137 [2024-11-19 16:42:01.464291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.137 [2024-11-19 16:42:01.464604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.137 [2024-11-19 16:42:01.464647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.137 [2024-11-19 16:42:01.464663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.137 [2024-11-19 16:42:01.464885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.137 [2024-11-19 16:42:01.465129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.137 [2024-11-19 16:42:01.465150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.137 [2024-11-19 16:42:01.465164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.137 [2024-11-19 16:42:01.465176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.397 [2024-11-19 16:42:01.477560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.397 [2024-11-19 16:42:01.477954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.397 [2024-11-19 16:42:01.477981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.397 [2024-11-19 16:42:01.477997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.397 [2024-11-19 16:42:01.478266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.397 [2024-11-19 16:42:01.478495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.397 [2024-11-19 16:42:01.478515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.397 [2024-11-19 16:42:01.478528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.397 [2024-11-19 16:42:01.478540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.397 [2024-11-19 16:42:01.490675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.397 [2024-11-19 16:42:01.491047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.397 [2024-11-19 16:42:01.491096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.397 [2024-11-19 16:42:01.491115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.397 [2024-11-19 16:42:01.491344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.397 [2024-11-19 16:42:01.491573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.397 [2024-11-19 16:42:01.491593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.397 [2024-11-19 16:42:01.491605] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.397 [2024-11-19 16:42:01.491617] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.397 [2024-11-19 16:42:01.503933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.397 [2024-11-19 16:42:01.504381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.397 [2024-11-19 16:42:01.504411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.397 [2024-11-19 16:42:01.504443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.397 [2024-11-19 16:42:01.504696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.397 [2024-11-19 16:42:01.504895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.397 [2024-11-19 16:42:01.504914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.504927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.504939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.517210] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.517628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.517655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.517670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.517893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.518169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.518190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.518204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.518216] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.530560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.530897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.530924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.530939] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.531198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.531440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.531460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.531472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.531500] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.543935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.544347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.544406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.544422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.544669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.544863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.544883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.544895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.544906] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.557281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.557650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.557679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.557696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.557926] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.558186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.558209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.558223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.558237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.570471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.570847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.570889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.570905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.571170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.571391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.571416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.571429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.571441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.583841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.584222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.584272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.584289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.584521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.584752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.584774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.584788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.584800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.597132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.398 [2024-11-19 16:42:01.597599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.398 [2024-11-19 16:42:01.597627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.398 [2024-11-19 16:42:01.597644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.398 [2024-11-19 16:42:01.597888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.398 [2024-11-19 16:42:01.598110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.398 [2024-11-19 16:42:01.598132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.398 [2024-11-19 16:42:01.598145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.398 [2024-11-19 16:42:01.598172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.398 [2024-11-19 16:42:01.610254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.398 [2024-11-19 16:42:01.610699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.398 [2024-11-19 16:42:01.610741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.398 [2024-11-19 16:42:01.610758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.398 [2024-11-19 16:42:01.610998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.398 [2024-11-19 16:42:01.611257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.398 [2024-11-19 16:42:01.611279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.398 [2024-11-19 16:42:01.611293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.398 [2024-11-19 16:42:01.611310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.398 [2024-11-19 16:42:01.623622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.398 [2024-11-19 16:42:01.623959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.623987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.624003] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.624228] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.624474] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.624494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.624507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.624520] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.637016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.637377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.637405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.637421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.637652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.637874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.637894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.637907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.637919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.650429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.650823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.650850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.650866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.651099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.651334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.651356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.651370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.651383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.663897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.664289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.664317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.664333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.664563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.664785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.664805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.664818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.664830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.677215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.677613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.677656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.677672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.677940] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.678168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.678189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.678202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.678214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.690494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.690920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.690948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.690964] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.691189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.691423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.691444] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.691456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.691469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.703805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.704159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.704188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.704204] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.704449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.704683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.704703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.704716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.704728] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.399 [2024-11-19 16:42:01.717340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.399 [2024-11-19 16:42:01.717718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.399 [2024-11-19 16:42:01.717747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.399 [2024-11-19 16:42:01.717763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.399 [2024-11-19 16:42:01.717993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.399 [2024-11-19 16:42:01.718251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.399 [2024-11-19 16:42:01.718273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.399 [2024-11-19 16:42:01.718287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.399 [2024-11-19 16:42:01.718300] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.400 [2024-11-19 16:42:01.731005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.667 [2024-11-19 16:42:01.731365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.667 [2024-11-19 16:42:01.731395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.667 [2024-11-19 16:42:01.731412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.667 [2024-11-19 16:42:01.731627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.667 [2024-11-19 16:42:01.731857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.667 [2024-11-19 16:42:01.731877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.667 [2024-11-19 16:42:01.731891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.667 [2024-11-19 16:42:01.731903] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.667 [2024-11-19 16:42:01.744429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.667 [2024-11-19 16:42:01.744774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.744800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.744816] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.745016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.745260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.745292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.745308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.745321] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.757900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.758286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.758314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.758331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.758568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.758802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.758822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.758835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.758847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.771300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.771694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.771735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.771751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.772005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.772253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.772275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.772289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.772301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.784862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.785209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.785237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.785253] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.785475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.785681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.785701] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.785714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.785731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.798234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.798599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.798628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.798644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.798882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.799115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.799136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.799166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.799178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.811588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.811971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.811999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.812016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.812242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.812488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.812509] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.812521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.812534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.825173] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.825518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.825559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.825575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.825812] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.826018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.826038] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.826051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.826063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.838502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.838895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.838927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.838945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.839172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.839393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.839414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.839428] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.839441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.852080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.852490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.852518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.852534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.852765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.852989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.853010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.853023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.853035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.865430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.865840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.865883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.865899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.866139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.866375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.866396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.866425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.866438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.878739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.879150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.879180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.879196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.879432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.879655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.668 [2024-11-19 16:42:01.879675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.668 [2024-11-19 16:42:01.879688] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.668 [2024-11-19 16:42:01.879700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.668 [2024-11-19 16:42:01.892170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.668 [2024-11-19 16:42:01.892535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.668 [2024-11-19 16:42:01.892563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.668 [2024-11-19 16:42:01.892578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.668 [2024-11-19 16:42:01.892801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.668 [2024-11-19 16:42:01.893008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.893028] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.893041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.893053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.905607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.905910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.905936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.905951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.906201] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.906450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.906471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.906483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.906495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.918952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.919330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.919358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.919374] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.919615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.919821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.919847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.919861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.919873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.932421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.932820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.932862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.932878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.933119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.933338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.933374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.933388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.933401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.945726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.946082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.946110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.946126] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.946346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.946569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.946590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.946602] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.946615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.959245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.959653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.959681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.959697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.959927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.960182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.960204] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.960218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.960230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.972584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.972965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.973007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.973023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.973264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.973513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.973534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.973547] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.973559] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.669 [2024-11-19 16:42:01.986035] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.669 [2024-11-19 16:42:01.986406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.669 [2024-11-19 16:42:01.986434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.669 [2024-11-19 16:42:01.986451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.669 [2024-11-19 16:42:01.986681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.669 [2024-11-19 16:42:01.986903] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.669 [2024-11-19 16:42:01.986923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.669 [2024-11-19 16:42:01.986936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.669 [2024-11-19 16:42:01.986948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.932 [2024-11-19 16:42:01.999569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.932 [2024-11-19 16:42:01.999966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.932 [2024-11-19 16:42:01.999994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.932 [2024-11-19 16:42:02.000010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.932 [2024-11-19 16:42:02.000237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.932 [2024-11-19 16:42:02.000484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.932 [2024-11-19 16:42:02.000504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.932 [2024-11-19 16:42:02.000518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.932 [2024-11-19 16:42:02.000530] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.932 [2024-11-19 16:42:02.012997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.932 [2024-11-19 16:42:02.013380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.932 [2024-11-19 16:42:02.013415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.932 [2024-11-19 16:42:02.013432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.932 [2024-11-19 16:42:02.013662] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.932 [2024-11-19 16:42:02.013884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.932 [2024-11-19 16:42:02.013904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.932 [2024-11-19 16:42:02.013917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.932 [2024-11-19 16:42:02.013930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.932 [2024-11-19 16:42:02.026346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.932 [2024-11-19 16:42:02.026694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.932 [2024-11-19 16:42:02.026721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.932 [2024-11-19 16:42:02.026738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.932 [2024-11-19 16:42:02.026946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.932 [2024-11-19 16:42:02.027216] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.932 [2024-11-19 16:42:02.027238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.932 [2024-11-19 16:42:02.027252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.932 [2024-11-19 16:42:02.027264] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.932 [2024-11-19 16:42:02.039810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.932 [2024-11-19 16:42:02.040213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.932 [2024-11-19 16:42:02.040241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.932 [2024-11-19 16:42:02.040257] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.932 [2024-11-19 16:42:02.040486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.932 [2024-11-19 16:42:02.040708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.932 [2024-11-19 16:42:02.040729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.040742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.040754] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.053241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.053639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.053667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.053684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.053919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.054186] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.054209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.054223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.054236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.066684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.067033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.067061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.067088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.067305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.067545] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.067565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.067578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.067590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.080064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.080541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.080584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.080600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.080856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.081090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.081111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.081125] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.081139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.093453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.093825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.093854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.093870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.094096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.094316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.094337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.094356] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.094370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.106924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.107316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.107345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.107361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.107591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.107813] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.107833] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.107846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.107858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.120481] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.120867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.120895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.120911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.121137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.121372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.121393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.121407] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.121434] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.133935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.134309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.134347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.134363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.134592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.134815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.134835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.134848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.134861] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.147453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.147839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.147881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.147896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.148149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.148362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.148397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.148411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.148423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.160876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.161239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.161268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.161284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.161499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.161727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.933 [2024-11-19 16:42:02.161748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.933 [2024-11-19 16:42:02.161761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.933 [2024-11-19 16:42:02.161773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.933 [2024-11-19 16:42:02.174402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.933 [2024-11-19 16:42:02.174799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.933 [2024-11-19 16:42:02.174827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.933 [2024-11-19 16:42:02.174843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.933 [2024-11-19 16:42:02.175084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.933 [2024-11-19 16:42:02.175320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.934 [2024-11-19 16:42:02.175342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.934 [2024-11-19 16:42:02.175355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.934 [2024-11-19 16:42:02.175368] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.934 [2024-11-19 16:42:02.187861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:11.934 4452.80 IOPS, 17.39 MiB/s [2024-11-19T15:42:02.273Z] [2024-11-19 16:42:02.189836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:11.934 [2024-11-19 16:42:02.189869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:11.934 [2024-11-19 16:42:02.189886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:11.934 [2024-11-19 16:42:02.190113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:11.934 [2024-11-19 16:42:02.190333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:11.934 [2024-11-19 16:42:02.190370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:11.934 [2024-11-19 16:42:02.190384] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:11.934 [2024-11-19 16:42:02.190396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:11.934 [2024-11-19 16:42:02.201266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.934 [2024-11-19 16:42:02.201628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.934 [2024-11-19 16:42:02.201655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.934 [2024-11-19 16:42:02.201671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.934 [2024-11-19 16:42:02.201893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.934 [2024-11-19 16:42:02.202131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.934 [2024-11-19 16:42:02.202153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.934 [2024-11-19 16:42:02.202167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.934 [2024-11-19 16:42:02.202180] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.934 [2024-11-19 16:42:02.214602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.934 [2024-11-19 16:42:02.214981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.934 [2024-11-19 16:42:02.215009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.934 [2024-11-19 16:42:02.215026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.934 [2024-11-19 16:42:02.215251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.934 [2024-11-19 16:42:02.215481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.934 [2024-11-19 16:42:02.215501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.934 [2024-11-19 16:42:02.215514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.934 [2024-11-19 16:42:02.215526] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.934 [2024-11-19 16:42:02.228065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.934 [2024-11-19 16:42:02.228500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.934 [2024-11-19 16:42:02.228528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.934 [2024-11-19 16:42:02.228544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.934 [2024-11-19 16:42:02.228782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.934 [2024-11-19 16:42:02.229006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.934 [2024-11-19 16:42:02.229026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.934 [2024-11-19 16:42:02.229039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.934 [2024-11-19 16:42:02.229066] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.934 [2024-11-19 16:42:02.241693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.934 [2024-11-19 16:42:02.242084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.934 [2024-11-19 16:42:02.242113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.934 [2024-11-19 16:42:02.242130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.934 [2024-11-19 16:42:02.242346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.934 [2024-11-19 16:42:02.242568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.934 [2024-11-19 16:42:02.242588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.934 [2024-11-19 16:42:02.242601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.934 [2024-11-19 16:42:02.242613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:11.934 [2024-11-19 16:42:02.255252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:11.934 [2024-11-19 16:42:02.255614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:11.934 [2024-11-19 16:42:02.255643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:11.934 [2024-11-19 16:42:02.255658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:11.934 [2024-11-19 16:42:02.255882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:11.934 [2024-11-19 16:42:02.256119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:11.934 [2024-11-19 16:42:02.256141] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:11.934 [2024-11-19 16:42:02.256156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:11.934 [2024-11-19 16:42:02.256168] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.196 [2024-11-19 16:42:02.269014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.196 [2024-11-19 16:42:02.269364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.196 [2024-11-19 16:42:02.269393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.196 [2024-11-19 16:42:02.269409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.196 [2024-11-19 16:42:02.269639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.196 [2024-11-19 16:42:02.269851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.196 [2024-11-19 16:42:02.269872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.196 [2024-11-19 16:42:02.269891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.196 [2024-11-19 16:42:02.269904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.196 [2024-11-19 16:42:02.282491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.196 [2024-11-19 16:42:02.282876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.196 [2024-11-19 16:42:02.282904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.196 [2024-11-19 16:42:02.282921] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.196 [2024-11-19 16:42:02.283147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.196 [2024-11-19 16:42:02.283382] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.196 [2024-11-19 16:42:02.283402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.196 [2024-11-19 16:42:02.283415] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.196 [2024-11-19 16:42:02.283427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.196 [2024-11-19 16:42:02.295853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.196 [2024-11-19 16:42:02.296185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.196 [2024-11-19 16:42:02.296213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.196 [2024-11-19 16:42:02.296230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.196 [2024-11-19 16:42:02.296462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.196 [2024-11-19 16:42:02.296685] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.196 [2024-11-19 16:42:02.296705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.196 [2024-11-19 16:42:02.296719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.196 [2024-11-19 16:42:02.296731] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.196 [2024-11-19 16:42:02.309412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.196 [2024-11-19 16:42:02.309795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.309824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.309840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.310081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.310315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.310337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.310351] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.310363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.322770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.323089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.323117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.323148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.323378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.323602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.323622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.323636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.323648] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.336128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.336482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.336525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.336542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.336771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.336993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.337014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.337027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.337039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.349591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.349987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.350015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.350032] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.350256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.350476] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.350497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.350511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.350524] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.363195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.363617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.363659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.363680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.363936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.364178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.364200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.364214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.364227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.376660] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.376995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.377023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.377039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.377292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.377518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.377538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.377551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.377563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.389954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.390344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.390372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.390388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.390619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.390840] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.390861] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.390874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.390886] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.403394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.403780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.403822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.403839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.404106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.404351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.404372] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.404386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.404399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.416764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.417116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.417145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.417162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.417377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.417600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.417620] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.417633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.417645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.197 [2024-11-19 16:42:02.430305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.197 [2024-11-19 16:42:02.430714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.197 [2024-11-19 16:42:02.430743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.197 [2024-11-19 16:42:02.430759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.197 [2024-11-19 16:42:02.430989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.197 [2024-11-19 16:42:02.431247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.197 [2024-11-19 16:42:02.431270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.197 [2024-11-19 16:42:02.431283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.197 [2024-11-19 16:42:02.431296] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.443682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.444066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.444100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.444116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.444346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.444570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.444590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.444609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.444621] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.457156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.457559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.457587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.457604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.457847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.458080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.458101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.458115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.458128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.470515] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.470865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.470891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.470907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.471146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.471365] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.471404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.471417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.471430] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.483910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.484317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.484345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.484362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.484592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.484815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.484835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.484848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.484860] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.497450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.497833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.497861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.497877] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.498132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.498351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.498373] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.498387] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.498400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.510962] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.511336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.511352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.511581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.511803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.511823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.511836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.511848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.198 [2024-11-19 16:42:02.524298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.198 [2024-11-19 16:42:02.524667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.198 [2024-11-19 16:42:02.524694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.198 [2024-11-19 16:42:02.524710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.198 [2024-11-19 16:42:02.524919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.198 [2024-11-19 16:42:02.525190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.198 [2024-11-19 16:42:02.525212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.198 [2024-11-19 16:42:02.525226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.198 [2024-11-19 16:42:02.525239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.460 [2024-11-19 16:42:02.537889] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.460 [2024-11-19 16:42:02.538273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.460 [2024-11-19 16:42:02.538301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.460 [2024-11-19 16:42:02.538323] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.460 [2024-11-19 16:42:02.538558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.460 [2024-11-19 16:42:02.538780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.460 [2024-11-19 16:42:02.538800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.460 [2024-11-19 16:42:02.538813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.460 [2024-11-19 16:42:02.538825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.460 [2024-11-19 16:42:02.551190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.460 [2024-11-19 16:42:02.551510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.460 [2024-11-19 16:42:02.551537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.460 [2024-11-19 16:42:02.551552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.460 [2024-11-19 16:42:02.551753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.460 [2024-11-19 16:42:02.551959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.460 [2024-11-19 16:42:02.551979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.460 [2024-11-19 16:42:02.551991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.460 [2024-11-19 16:42:02.552004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.460 [2024-11-19 16:42:02.564487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:12.460 [2024-11-19 16:42:02.564844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:12.460 [2024-11-19 16:42:02.564872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420
00:36:12.460 [2024-11-19 16:42:02.564888] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set
00:36:12.460 [2024-11-19 16:42:02.565130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor
00:36:12.460 [2024-11-19 16:42:02.565381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:12.460 [2024-11-19 16:42:02.565402] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:12.460 [2024-11-19 16:42:02.565416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:12.460 [2024-11-19 16:42:02.565428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:12.460 [2024-11-19 16:42:02.577941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.460 [2024-11-19 16:42:02.578304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-11-19 16:42:02.578332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.460 [2024-11-19 16:42:02.578348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.460 [2024-11-19 16:42:02.578578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.460 [2024-11-19 16:42:02.578807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.460 [2024-11-19 16:42:02.578827] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.460 [2024-11-19 16:42:02.578841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.460 [2024-11-19 16:42:02.578853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.460 [2024-11-19 16:42:02.591365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.460 [2024-11-19 16:42:02.591732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-11-19 16:42:02.591760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.460 [2024-11-19 16:42:02.591776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.460 [2024-11-19 16:42:02.592006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.460 [2024-11-19 16:42:02.592262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.460 [2024-11-19 16:42:02.592284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.460 [2024-11-19 16:42:02.592298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.460 [2024-11-19 16:42:02.592311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.460 [2024-11-19 16:42:02.604689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.460 [2024-11-19 16:42:02.605035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-11-19 16:42:02.605063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.460 [2024-11-19 16:42:02.605090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.460 [2024-11-19 16:42:02.605306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.460 [2024-11-19 16:42:02.605525] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.460 [2024-11-19 16:42:02.605546] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.460 [2024-11-19 16:42:02.605560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.460 [2024-11-19 16:42:02.605572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.460 [2024-11-19 16:42:02.618272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.460 [2024-11-19 16:42:02.618647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-11-19 16:42:02.618675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.460 [2024-11-19 16:42:02.618691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.460 [2024-11-19 16:42:02.618921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.460 [2024-11-19 16:42:02.619178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.460 [2024-11-19 16:42:02.619200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.460 [2024-11-19 16:42:02.619219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.460 [2024-11-19 16:42:02.619232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.460 [2024-11-19 16:42:02.631686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.460 [2024-11-19 16:42:02.632036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.460 [2024-11-19 16:42:02.632064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.460 [2024-11-19 16:42:02.632090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.460 [2024-11-19 16:42:02.632305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.632550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.632570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.632583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.632595] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.645148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.645553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.645580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.645596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.645826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.646063] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.646094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.646109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.646122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.658524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.658864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.658892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.658909] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.659134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.659354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.659390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.659403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.659415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.672020] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.672396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.672424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.672441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.672673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.672895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.672915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.672928] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.672940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.685491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.685821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.685849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.685865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.686102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.686322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.686344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.686359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.686386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.698849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.699219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.699247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.699263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.699478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.699705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.699726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.699740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.699752] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.712258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.712618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.712646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.712667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.712897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.713148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.713170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.713184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.713197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.725693] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.726038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.726066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.726093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.726308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.726552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.726572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.726585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.726597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.739225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.739632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.739675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.739691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.461 [2024-11-19 16:42:02.739948] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.461 [2024-11-19 16:42:02.740205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.461 [2024-11-19 16:42:02.740233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.461 [2024-11-19 16:42:02.740247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.461 [2024-11-19 16:42:02.740259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.461 [2024-11-19 16:42:02.752596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.461 [2024-11-19 16:42:02.752983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.461 [2024-11-19 16:42:02.753012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.461 [2024-11-19 16:42:02.753028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.462 [2024-11-19 16:42:02.753253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.462 [2024-11-19 16:42:02.753499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.462 [2024-11-19 16:42:02.753520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.462 [2024-11-19 16:42:02.753533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.462 [2024-11-19 16:42:02.753545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.462 [2024-11-19 16:42:02.766109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.462 [2024-11-19 16:42:02.766521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-11-19 16:42:02.766549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.462 [2024-11-19 16:42:02.766565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.462 [2024-11-19 16:42:02.766780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.462 [2024-11-19 16:42:02.767008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.462 [2024-11-19 16:42:02.767029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.462 [2024-11-19 16:42:02.767043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.462 [2024-11-19 16:42:02.767079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.462 [2024-11-19 16:42:02.779398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.462 [2024-11-19 16:42:02.779799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-11-19 16:42:02.779842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.462 [2024-11-19 16:42:02.779858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.462 [2024-11-19 16:42:02.780111] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.462 [2024-11-19 16:42:02.780339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.462 [2024-11-19 16:42:02.780359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.462 [2024-11-19 16:42:02.780373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.462 [2024-11-19 16:42:02.780386] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.462 [2024-11-19 16:42:02.792978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.462 [2024-11-19 16:42:02.793350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.462 [2024-11-19 16:42:02.793378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.462 [2024-11-19 16:42:02.793395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.462 [2024-11-19 16:42:02.793610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 398681 Killed "${NVMF_APP[@]}" "$@" 00:36:12.723 [2024-11-19 16:42:02.793838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.793859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.793880] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.793895] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=399756 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 399756 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 399756 ']' 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:12.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:12.723 16:42:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.723 [2024-11-19 16:42:02.806489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.806838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.806867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.806883] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.723 [2024-11-19 16:42:02.807108] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.723 [2024-11-19 16:42:02.807328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.807364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.807377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.807390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.723 [2024-11-19 16:42:02.820029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.820413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.820439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.820455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.723 [2024-11-19 16:42:02.820684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.723 [2024-11-19 16:42:02.820910] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.820946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.820965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.820978] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.723 [2024-11-19 16:42:02.833410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.833793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.833821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.833837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.723 [2024-11-19 16:42:02.834065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.723 [2024-11-19 16:42:02.834308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.834329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.834343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.834356] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.723 [2024-11-19 16:42:02.846830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.847235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.847264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.847281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.723 [2024-11-19 16:42:02.847511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.723 [2024-11-19 16:42:02.847732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.847752] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.847765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.847777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:12.723 [2024-11-19 16:42:02.850251] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:36:12.723 [2024-11-19 16:42:02.850329] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:12.723 [2024-11-19 16:42:02.860549] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.860929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.860958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.860974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.723 [2024-11-19 16:42:02.861198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.723 [2024-11-19 16:42:02.861418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.723 [2024-11-19 16:42:02.861445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.723 [2024-11-19 16:42:02.861459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.723 [2024-11-19 16:42:02.861472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.723 [2024-11-19 16:42:02.874111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.723 [2024-11-19 16:42:02.874502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.723 [2024-11-19 16:42:02.874529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.723 [2024-11-19 16:42:02.874546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.874777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.874999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.875019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.875032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.875044] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.887585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.887965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.887993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.888009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.888233] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.888466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.888488] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.888501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.888513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.901208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.901561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.901588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.901604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.901819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.902047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.902077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.902110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.902124] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.914778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.915110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.915139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.915155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.915385] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.915597] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.915618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.915631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.915644] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.926937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:12.724 [2024-11-19 16:42:02.928293] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.928671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.928699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.928715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.928951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.929196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.929219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.929233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.929245] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.941769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.942322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.942371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.942390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.942644] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.942853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.942874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.942889] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.942904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.955201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.955570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.955598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.955614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.955845] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.956094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.956131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.956147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.956160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.724 [2024-11-19 16:42:02.968659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.724 [2024-11-19 16:42:02.969011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.724 [2024-11-19 16:42:02.969040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.724 [2024-11-19 16:42:02.969058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.724 [2024-11-19 16:42:02.969285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.724 [2024-11-19 16:42:02.969528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.724 [2024-11-19 16:42:02.969548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.724 [2024-11-19 16:42:02.969561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.724 [2024-11-19 16:42:02.969574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:12.724 [2024-11-19 16:42:02.972918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:12.724 [2024-11-19 16:42:02.972951] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:12.724 [2024-11-19 16:42:02.972980] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:12.724 [2024-11-19 16:42:02.972991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:12.724 [2024-11-19 16:42:02.973001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:12.725 [2024-11-19 16:42:02.974371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:12.725 [2024-11-19 16:42:02.974433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:12.725 [2024-11-19 16:42:02.974436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:12.725 [2024-11-19 16:42:02.982275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:02.982740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:02.982776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:02.982794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:02.983017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:02.983250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:02.983282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:02.983299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:02.983314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.725 [2024-11-19 16:42:02.995942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:02.996493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:02.996532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:02.996551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:02.996776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:02.997000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:02.997022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:02.997038] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:02.997053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.725 [2024-11-19 16:42:03.009557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:03.010059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:03.010109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:03.010129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:03.010354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:03.010577] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:03.010600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:03.010617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:03.010632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.725 [2024-11-19 16:42:03.023192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:03.023673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:03.023710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:03.023730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:03.023954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:03.024188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:03.024210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:03.024228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:03.024244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.725 [2024-11-19 16:42:03.036947] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:03.037461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:03.037499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:03.037518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:03.037740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:03.037964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:03.037986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:03.038002] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:03.038018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.725 [2024-11-19 16:42:03.050708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.725 [2024-11-19 16:42:03.051230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.725 [2024-11-19 16:42:03.051269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.725 [2024-11-19 16:42:03.051288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.725 [2024-11-19 16:42:03.051512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.725 [2024-11-19 16:42:03.051735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.725 [2024-11-19 16:42:03.051756] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.725 [2024-11-19 16:42:03.051772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.725 [2024-11-19 16:42:03.051788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.987 [2024-11-19 16:42:03.064361] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.987 [2024-11-19 16:42:03.064727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.987 [2024-11-19 16:42:03.064755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.987 [2024-11-19 16:42:03.064772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.987 [2024-11-19 16:42:03.064988] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.987 [2024-11-19 16:42:03.065217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.987 [2024-11-19 16:42:03.065238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.987 [2024-11-19 16:42:03.065253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.987 [2024-11-19 16:42:03.065265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.987 [2024-11-19 16:42:03.077985] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.987 [2024-11-19 16:42:03.078326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.987 [2024-11-19 16:42:03.078365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.987 [2024-11-19 16:42:03.078382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.987 [2024-11-19 16:42:03.078597] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.987 [2024-11-19 16:42:03.078816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.987 [2024-11-19 16:42:03.078838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.987 [2024-11-19 16:42:03.078852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.987 [2024-11-19 16:42:03.078866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.987 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:12.987 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:12.987 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:12.987 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 [2024-11-19 16:42:03.091677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.092037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.092065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.092091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.092310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.092530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.092551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.092566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.092579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.988 [2024-11-19 16:42:03.105218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.105579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.105607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.105624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.105839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.106058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.106087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.106102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.106116] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 [2024-11-19 16:42:03.117108] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:12.988 [2024-11-19 16:42:03.118737] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.119047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.119082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.119101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.119317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.119536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.119558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.119572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.119586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 [2024-11-19 16:42:03.132408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.132862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.132895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.132913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.133141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.133371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.133392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.133408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.133423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.988 [2024-11-19 16:42:03.146079] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.146440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.146469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.146486] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.146702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.146932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.146961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.146979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.147000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:12.988 [2024-11-19 16:42:03.159729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.160207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.160242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.160267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.988 [2024-11-19 16:42:03.160489] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.988 [2024-11-19 16:42:03.160712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.988 [2024-11-19 16:42:03.160734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:12.988 [2024-11-19 16:42:03.160750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.988 [2024-11-19 16:42:03.160766] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
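The repeated reconnect failures above all report `connect() failed, errno = 111`. On Linux that code is ECONNREFUSED: the kernel answered the TCP SYN with an RST because nothing was listening on 10.0.0.2:4420 at that point in the test (the listener is only added later by `nvmf_subsystem_add_listener`). A minimal sketch decoding the code with the standard library (assumes a Linux errno table, matching the test host):

```python
import errno
import os

# errno 111 on Linux is ECONNREFUSED: the connect() target had no listener
# bound to the address/port, so the kernel refused the connection.
code = 111
name = errno.errorcode[code]   # symbolic name for the code
message = os.strerror(code)    # human-readable description

print(f"errno {code}: {name} - {message}")
```

Once the log reaches `nvmf_tcp_listen: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***`, the same reconnect path succeeds (`Resetting controller successful`).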
00:36:12.988 Malloc0 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.988 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.988 [2024-11-19 16:42:03.173413] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.988 [2024-11-19 16:42:03.173771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:12.988 [2024-11-19 16:42:03.173799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa90cf0 with addr=10.0.0.2, port=4420 00:36:12.988 [2024-11-19 16:42:03.173815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa90cf0 is same with the state(6) to be set 00:36:12.989 [2024-11-19 16:42:03.174031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa90cf0 (9): Bad file descriptor 00:36:12.989 [2024-11-19 16:42:03.174260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:12.989 [2024-11-19 16:42:03.174282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 
2] controller reinitialization failed 00:36:12.989 [2024-11-19 16:42:03.174297] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:12.989 [2024-11-19 16:42:03.174310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:12.989 [2024-11-19 16:42:03.181524] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:12.989 16:42:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 398965 00:36:12.989 [2024-11-19 16:42:03.186997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:12.989 3710.67 IOPS, 14.49 MiB/s [2024-11-19T15:42:03.328Z] [2024-11-19 16:42:03.252576] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:36:14.862 4338.86 IOPS, 16.95 MiB/s [2024-11-19T15:42:06.579Z] 4877.62 IOPS, 19.05 MiB/s [2024-11-19T15:42:07.513Z] 5313.33 IOPS, 20.76 MiB/s [2024-11-19T15:42:08.449Z] 5665.30 IOPS, 22.13 MiB/s [2024-11-19T15:42:09.388Z] 5936.91 IOPS, 23.19 MiB/s [2024-11-19T15:42:10.331Z] 6153.08 IOPS, 24.04 MiB/s [2024-11-19T15:42:11.265Z] 6355.92 IOPS, 24.83 MiB/s [2024-11-19T15:42:12.646Z] 6509.29 IOPS, 25.43 MiB/s 00:36:22.307 Latency(us) 00:36:22.307 [2024-11-19T15:42:12.646Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.307 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:22.307 Verification LBA range: start 0x0 length 0x4000 00:36:22.307 Nvme1n1 : 15.00 6656.61 26.00 9775.25 0.00 7766.20 561.30 17767.54 00:36:22.307 [2024-11-19T15:42:12.646Z] =================================================================================================================== 00:36:22.307 [2024-11-19T15:42:12.646Z] Total : 6656.61 26.00 9775.25 0.00 7766.20 561.30 17767.54 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:22.307 16:42:12 
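The bdevperf summary above pairs each IOPS figure with a MiB/s figure for 4096-byte IOs, and the two columns are mutually consistent: MiB/s = IOPS x 4096 / 2^20. A quick sketch checking that relationship against totals taken from the log (the figures are from this run; the helper name is illustrative):

```python
IO_SIZE = 4096        # bytes per IO, from the job line: "IO size: 4096"
MIB = 1024 * 1024     # bytes per MiB

def mibps(iops: float, io_size: int = IO_SIZE) -> float:
    """Throughput in MiB/s implied by an IOPS figure at a fixed IO size."""
    return iops * io_size / MIB

# Final total row: 6656.61 IOPS reported alongside 26.00 MiB/s.
print(round(mibps(6656.61), 2))
# First sample: 3710.67 IOPS reported alongside 14.49 MiB/s.
print(round(mibps(3710.67), 2))
```

This is a handy sanity check when reading bdevperf output: if the two columns ever disagree, the IO size in the job line is not the one actually used.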
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:22.307 rmmod nvme_tcp 00:36:22.307 rmmod nvme_fabrics 00:36:22.307 rmmod nvme_keyring 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 399756 ']' 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 399756 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 399756 ']' 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 399756 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 399756 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 399756' 00:36:22.307 killing process with pid 399756 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 399756 00:36:22.307 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 399756 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:22.566 16:42:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:24.475 00:36:24.475 real 0m22.347s 00:36:24.475 user 0m59.582s 00:36:24.475 sys 0m4.220s 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:24.475 ************************************ 00:36:24.475 END TEST nvmf_bdevperf 00:36:24.475 ************************************ 00:36:24.475 16:42:14 
nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.475 16:42:14 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:24.734 ************************************ 00:36:24.734 START TEST nvmf_target_disconnect 00:36:24.734 ************************************ 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:24.734 * Looking for test storage... 00:36:24.734 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:24.734 16:42:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:24.734 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.735 --rc genhtml_branch_coverage=1 00:36:24.735 --rc genhtml_function_coverage=1 00:36:24.735 --rc genhtml_legend=1 00:36:24.735 --rc geninfo_all_blocks=1 00:36:24.735 --rc geninfo_unexecuted_blocks=1 
00:36:24.735 00:36:24.735 ' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.735 --rc genhtml_branch_coverage=1 00:36:24.735 --rc genhtml_function_coverage=1 00:36:24.735 --rc genhtml_legend=1 00:36:24.735 --rc geninfo_all_blocks=1 00:36:24.735 --rc geninfo_unexecuted_blocks=1 00:36:24.735 00:36:24.735 ' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.735 --rc genhtml_branch_coverage=1 00:36:24.735 --rc genhtml_function_coverage=1 00:36:24.735 --rc genhtml_legend=1 00:36:24.735 --rc geninfo_all_blocks=1 00:36:24.735 --rc geninfo_unexecuted_blocks=1 00:36:24.735 00:36:24.735 ' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:24.735 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:24.735 --rc genhtml_branch_coverage=1 00:36:24.735 --rc genhtml_function_coverage=1 00:36:24.735 --rc genhtml_legend=1 00:36:24.735 --rc geninfo_all_blocks=1 00:36:24.735 --rc geninfo_unexecuted_blocks=1 00:36:24.735 00:36:24.735 ' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:24.735 16:42:14 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:24.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:24.735 16:42:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:27.284 
16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:27.284 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:27.285 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:27.285 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:27.285 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:27.285 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.285 16:42:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:27.285 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.285 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:36:27.285 00:36:27.285 --- 10.0.0.2 ping statistics --- 00:36:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.285 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:27.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:36:27.285 00:36:27.285 --- 10.0.0.1 ping statistics --- 00:36:27.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.285 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:27.285 16:42:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.285 ************************************ 00:36:27.285 START TEST nvmf_target_disconnect_tc1 00:36:27.285 ************************************ 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.285 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.286 [2024-11-19 16:42:17.335009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.286 [2024-11-19 16:42:17.335099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20a0a90 with 
addr=10.0.0.2, port=4420 00:36:27.286 [2024-11-19 16:42:17.335131] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:27.286 [2024-11-19 16:42:17.335155] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:27.286 [2024-11-19 16:42:17.335168] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:27.286 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:27.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:27.286 Initializing NVMe Controllers 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:27.286 00:36:27.286 real 0m0.095s 00:36:27.286 user 0m0.046s 00:36:27.286 sys 0m0.049s 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:27.286 ************************************ 00:36:27.286 END TEST nvmf_target_disconnect_tc1 00:36:27.286 ************************************ 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.286 16:42:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.286 ************************************ 00:36:27.286 START TEST nvmf_target_disconnect_tc2 00:36:27.286 ************************************ 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403424 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403424 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403424 ']' 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:27.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:27.286 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.286 [2024-11-19 16:42:17.445387] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:36:27.286 [2024-11-19 16:42:17.445477] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:27.286 [2024-11-19 16:42:17.516249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:27.286 [2024-11-19 16:42:17.563408] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:27.286 [2024-11-19 16:42:17.563461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:27.286 [2024-11-19 16:42:17.563489] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:27.286 [2024-11-19 16:42:17.563500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:27.286 [2024-11-19 16:42:17.563510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:27.286 [2024-11-19 16:42:17.564996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:36:27.286 [2024-11-19 16:42:17.565058] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:36:27.286 [2024-11-19 16:42:17.565191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:36:27.286 [2024-11-19 16:42:17.565195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.546 Malloc0 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.546 16:42:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.546 [2024-11-19 16:42:17.736673] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.546 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.547 16:42:17 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.547 [2024-11-19 16:42:17.764946] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=403449 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.547 16:42:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:29.452 16:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 403424 00:36:29.452 16:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read 
completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Write completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 [2024-11-19 16:42:19.789547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.739 Read completed with error (sct=0, sc=8) 00:36:29.739 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 
00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Read completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 00:36:29.740 Write completed with error (sct=0, sc=8) 00:36:29.740 starting I/O failed 
00:36:29.740 [2024-11-19 16:42:19.789922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:36:29.740 Read completed with error (sct=0, sc=8)
00:36:29.740 starting I/O failed
00:36:29.740 (the two messages above repeated for 32 I/Os on qpair id 3: 25 reads, 7 writes)
00:36:29.740 [2024-11-19 16:42:19.790254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:36:29.740 Read completed with error (sct=0, sc=8)
00:36:29.740 starting I/O failed
00:36:29.741 (the two messages above repeated for 32 I/Os on qpair id 2: 20 reads, 12 writes)
00:36:29.741 [2024-11-19 16:42:19.790602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:29.741 [2024-11-19 16:42:19.790752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.741 [2024-11-19 16:42:19.790794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.741 qpair failed and we were unable to recover it.
00:36:29.741 (the connect()/sock-connection-error/qpair-failed triplet above repeated 99 more times between [2024-11-19 16:42:19.790950] and [2024-11-19 16:42:19.804816], cycling through tqpair addresses 0x7feecc000b90, 0x7feed8000b90, 0x7feed4000b90, and 0x1443b40, all with addr=10.0.0.2, port=4420)
00:36:29.744 [2024-11-19 16:42:19.804903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.804929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 
00:36:29.744 [2024-11-19 16:42:19.805640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.805920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.805947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 
00:36:29.744 [2024-11-19 16:42:19.806317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.806924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.806963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 
00:36:29.744 [2024-11-19 16:42:19.807055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 
00:36:29.744 [2024-11-19 16:42:19.807676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.807900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.807926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.808010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.808036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.744 [2024-11-19 16:42:19.808175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.808205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 
00:36:29.744 [2024-11-19 16:42:19.808298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.744 [2024-11-19 16:42:19.808326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.744 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.808413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.808439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.808553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.808579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.808692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.808718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.808866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.808893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 
00:36:29.745 [2024-11-19 16:42:19.808977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 
00:36:29.745 [2024-11-19 16:42:19.809598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.809884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.809993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.810141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 
00:36:29.745 [2024-11-19 16:42:19.810281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.810427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.810550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.810685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.810805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 
00:36:29.745 [2024-11-19 16:42:19.810934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.810959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 
00:36:29.745 [2024-11-19 16:42:19.811609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.745 [2024-11-19 16:42:19.811906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.745 qpair failed and we were unable to recover it. 00:36:29.745 [2024-11-19 16:42:19.811988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.746 [2024-11-19 16:42:19.812226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.746 [2024-11-19 16:42:19.812862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.812889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.812998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.746 [2024-11-19 16:42:19.813537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.813935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.813960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.814091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.746 [2024-11-19 16:42:19.814230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.814376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.814511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.814656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.814804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.746 [2024-11-19 16:42:19.814943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.814968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.815051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.815084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.815172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.815199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.815319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.815346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 00:36:29.746 [2024-11-19 16:42:19.815431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.746 [2024-11-19 16:42:19.815457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.746 qpair failed and we were unable to recover it. 
00:36:29.750 [2024-11-19 16:42:19.830109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.830220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.830361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.830528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.830666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 
00:36:29.750 [2024-11-19 16:42:19.830791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.830918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.830957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.831086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.831226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.831361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 
00:36:29.750 [2024-11-19 16:42:19.831477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.831595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.750 [2024-11-19 16:42:19.831734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.750 [2024-11-19 16:42:19.831759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.750 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.831907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.831933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.832045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.832203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.832341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.832450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.832619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.832771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.832925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.832952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.833600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.833929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.833968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.834337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.834930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.834969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.835109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.835296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.835468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.835602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.835741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.835896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.835924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.836018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.836045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.836174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.836201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.836310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.836336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 00:36:29.751 [2024-11-19 16:42:19.836455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.836481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.751 qpair failed and we were unable to recover it. 
00:36:29.751 [2024-11-19 16:42:19.836626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.751 [2024-11-19 16:42:19.836653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.836770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.836798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.836886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.836912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.837330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.837952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.837978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.838065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.838221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.838358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.838496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.838606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.838727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.838886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.838914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.839485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.839905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.839933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.840017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.840161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.840316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.840451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.840590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 00:36:29.752 [2024-11-19 16:42:19.840702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.752 [2024-11-19 16:42:19.840728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.752 qpair failed and we were unable to recover it. 
00:36:29.752 [2024-11-19 16:42:19.840835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.752 [2024-11-19 16:42:19.840861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.752 qpair failed and we were unable to recover it.
00:36:29.752 [2024-11-19 16:42:19.840999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.841897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.841990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.842948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.842974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.843887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.843926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.844871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.844898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.753 [2024-11-19 16:42:19.845653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.753 [2024-11-19 16:42:19.845678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.753 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.845787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.845813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.845925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.845954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.846942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.846981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.847883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.847992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.848969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.848997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.849884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.849911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.850084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.850125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.850218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.850246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.850356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.850383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.754 [2024-11-19 16:42:19.850488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.754 [2024-11-19 16:42:19.850514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.754 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.850608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.850646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.850813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.850867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.850996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.851874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.851912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.852857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.852883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.853923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.853949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.854101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.854278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.854419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.854665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.854865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.854981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.755 [2024-11-19 16:42:19.855806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.755 qpair failed and we were unable to recover it.
00:36:29.755 [2024-11-19 16:42:19.855913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.855942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.856958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.856996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.857153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.857298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.857453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.857596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.756 [2024-11-19 16:42:19.857791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.756 qpair failed and we were unable to recover it.
00:36:29.756 [2024-11-19 16:42:19.857928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.857954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.858096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.858204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.858314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.858465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 
00:36:29.756 [2024-11-19 16:42:19.858678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.858843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.858869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 
00:36:29.756 [2024-11-19 16:42:19.859423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.859955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.859982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.756 [2024-11-19 16:42:19.860095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.860122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 
00:36:29.756 [2024-11-19 16:42:19.860214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.756 [2024-11-19 16:42:19.860241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.756 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.860347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.860373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.860489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.860515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.860607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.860634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.860751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.860776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.860884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.860910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.861499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.861872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.861912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.862316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.862958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.862984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.863087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.863227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.863342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.863487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.863624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.863739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.863881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.863908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.864044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.864157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.864271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 
00:36:29.757 [2024-11-19 16:42:19.864411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.864524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.757 [2024-11-19 16:42:19.864680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.757 [2024-11-19 16:42:19.864719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.757 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.864810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.864836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.864948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.864975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.865055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.865203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.865338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.865446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.865587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.865735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.865901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.865927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.866395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.866951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.866977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.867061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.867175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.867312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.867415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.867556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.867711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.867881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.867920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.868075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.868198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.868335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 
00:36:29.758 [2024-11-19 16:42:19.868474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.868596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.758 [2024-11-19 16:42:19.868715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.758 [2024-11-19 16:42:19.868749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.758 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.868828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.868854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.868994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.869106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.869253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.869390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.869547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.869716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.869881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.869907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.869995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.870508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.870925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.870965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.871055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.871093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.871234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.871261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.871410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.871436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.871604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.871662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.871868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.871918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.872061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.872205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.872341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.872556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.872720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.872830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 
00:36:29.759 [2024-11-19 16:42:19.872968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.872995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.759 [2024-11-19 16:42:19.873112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.759 [2024-11-19 16:42:19.873139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.759 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.873221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.873333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.873475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.760 [2024-11-19 16:42:19.873579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.873729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.873851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.873890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.760 [2024-11-19 16:42:19.874272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.874857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.874896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.760 [2024-11-19 16:42:19.874987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.875156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.875299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.875442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.875553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.760 [2024-11-19 16:42:19.875719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.875873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.875912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.760 [2024-11-19 16:42:19.876454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.876913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.876938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 00:36:29.760 [2024-11-19 16:42:19.877079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.760 [2024-11-19 16:42:19.877106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.760 qpair failed and we were unable to recover it. 
00:36:29.761 [2024-11-19 16:42:19.877194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.877295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.877431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.877576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.877684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 
00:36:29.761 [2024-11-19 16:42:19.877799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.877940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.877968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 
00:36:29.761 [2024-11-19 16:42:19.878476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.878929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.878969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.879117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 
00:36:29.761 [2024-11-19 16:42:19.879261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.879401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.879538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.879678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.879813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 
00:36:29.761 [2024-11-19 16:42:19.879922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.879948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.880030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.880056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.880146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.880172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.880301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.761 [2024-11-19 16:42:19.880339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.761 qpair failed and we were unable to recover it. 00:36:29.761 [2024-11-19 16:42:19.880498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.880549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 
00:36:29.762 [2024-11-19 16:42:19.880673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.880723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.880883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.880934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 
00:36:29.762 [2024-11-19 16:42:19.881455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.881947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.881972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 
00:36:29.762 [2024-11-19 16:42:19.882225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 
00:36:29.762 [2024-11-19 16:42:19.882862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.882887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.882994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.883020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.883132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.883157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.883271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.883296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 00:36:29.762 [2024-11-19 16:42:19.883410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.762 [2024-11-19 16:42:19.883435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.762 qpair failed and we were unable to recover it. 
00:36:29.762 [2024-11-19 16:42:19.883545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.883570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.883683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.883707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.883802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.883831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.883920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.883946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.884882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.884992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.885017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.885099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.762 [2024-11-19 16:42:19.885125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.762 qpair failed and we were unable to recover it.
00:36:29.762 [2024-11-19 16:42:19.885211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.885892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.885992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.886866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.886905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.887864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.887982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.888964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.888990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.889081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.889111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.889220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.889246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.889341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.889368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.889451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.763 [2024-11-19 16:42:19.889478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.763 qpair failed and we were unable to recover it.
00:36:29.763 [2024-11-19 16:42:19.889596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.889624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.889740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.889766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.889902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.889941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.890881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.890990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.891871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.891898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.892959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.892986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.764 [2024-11-19 16:42:19.893965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.764 [2024-11-19 16:42:19.893990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.764 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.894962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.894990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.895918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.895943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.896080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.896106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.896219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.896246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.896329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.765 [2024-11-19 16:42:19.896354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.765 qpair failed and we were unable to recover it.
00:36:29.765 [2024-11-19 16:42:19.896495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.896522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.896640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.896668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.896772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.896830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.896940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.896967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 
00:36:29.765 [2024-11-19 16:42:19.897177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 
00:36:29.765 [2024-11-19 16:42:19.897790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.897928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.897955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.898076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.898104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.898190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.898216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.765 qpair failed and we were unable to recover it. 00:36:29.765 [2024-11-19 16:42:19.898307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.765 [2024-11-19 16:42:19.898332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.898471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.898519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.898704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.898728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.898836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.898867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.898957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.898984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.899064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.899211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.899401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.899598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.899734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.899843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.899953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.899979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.900590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.900890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.900999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.901126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.901274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.901388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.901533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.901649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.901819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.901930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.901955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.902036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.902214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.902379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.902543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 
00:36:29.766 [2024-11-19 16:42:19.902765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.766 [2024-11-19 16:42:19.902931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.766 [2024-11-19 16:42:19.902958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.766 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 
00:36:29.767 [2024-11-19 16:42:19.903552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.903883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.903975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.904108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 
00:36:29.767 [2024-11-19 16:42:19.904214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.904355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.904496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.904651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.904784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 
00:36:29.767 [2024-11-19 16:42:19.904937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.904976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 
00:36:29.767 [2024-11-19 16:42:19.905575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.905853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.905891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.906043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.906076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.906162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.906189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 
00:36:29.767 [2024-11-19 16:42:19.906271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.906299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.767 qpair failed and we were unable to recover it. 00:36:29.767 [2024-11-19 16:42:19.906416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.767 [2024-11-19 16:42:19.906464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.906599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.906653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.906742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.906770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.906897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.906925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 
00:36:29.768 [2024-11-19 16:42:19.907015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 
00:36:29.768 [2024-11-19 16:42:19.907673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.907892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.907987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 
00:36:29.768 [2024-11-19 16:42:19.908388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.908932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.908960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 
00:36:29.768 [2024-11-19 16:42:19.909045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.909077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.909225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.909251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.909379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.909405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.909622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.909689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 00:36:29.768 [2024-11-19 16:42:19.909834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.768 [2024-11-19 16:42:19.909900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.768 qpair failed and we were unable to recover it. 
00:36:29.768 [2024-11-19 16:42:19.910013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.910917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.768 [2024-11-19 16:42:19.910942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.768 qpair failed and we were unable to recover it.
00:36:29.768 [2024-11-19 16:42:19.911052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.911856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.911985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.912915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.912941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.913861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.913976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.914898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.914924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.915037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.769 [2024-11-19 16:42:19.915061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.769 qpair failed and we were unable to recover it.
00:36:29.769 [2024-11-19 16:42:19.915161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.915334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.915472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.915644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.915783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.915904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.915930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.916959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.916985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.917914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.917994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.918893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.918920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.770 qpair failed and we were unable to recover it.
00:36:29.770 [2024-11-19 16:42:19.919008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.770 [2024-11-19 16:42:19.919035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.919926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.919953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.920870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.920898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.921962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.921987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.922854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.923013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.923053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.923155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.771 [2024-11-19 16:42:19.923183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.771 qpair failed and we were unable to recover it.
00:36:29.771 [2024-11-19 16:42:19.923270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.771 [2024-11-19 16:42:19.923297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.771 qpair failed and we were unable to recover it. 00:36:29.771 [2024-11-19 16:42:19.923390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.771 [2024-11-19 16:42:19.923416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.771 qpair failed and we were unable to recover it. 00:36:29.771 [2024-11-19 16:42:19.923503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.771 [2024-11-19 16:42:19.923561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.771 qpair failed and we were unable to recover it. 00:36:29.771 [2024-11-19 16:42:19.923672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.771 [2024-11-19 16:42:19.923700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.923892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.923942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.924059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.924178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.924317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.924456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.924595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.924737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.924874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.924898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.925400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.925912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.925939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.926046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.926708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.926941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.926980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.927102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.927134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.772 [2024-11-19 16:42:19.927253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.927279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 
00:36:29.772 [2024-11-19 16:42:19.927363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.772 [2024-11-19 16:42:19.927390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.772 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.927503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.927529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.927611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.927637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.927778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.927804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.927901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.927930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.928055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.928224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.928340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.928447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.928647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.928808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.928972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.928997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.929135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.929297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.929443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.929606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.929737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.929891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.929931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.930321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.930952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.930977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.931057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.931175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.931315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.931509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.931700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 
00:36:29.773 [2024-11-19 16:42:19.931835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.931950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.931978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.932098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.932125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.932216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.773 [2024-11-19 16:42:19.932244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.773 qpair failed and we were unable to recover it. 00:36:29.773 [2024-11-19 16:42:19.932388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.932431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.932573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.932618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.932758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.932784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.932904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.932931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.933333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.933908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.933934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.934055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.934196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.934334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.934567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.934771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.934919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.934945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.935138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.935306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.935470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.935617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.935747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.935884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.935912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.936488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.936881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.936908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 00:36:29.774 [2024-11-19 16:42:19.937007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.774 [2024-11-19 16:42:19.937046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.774 qpair failed and we were unable to recover it. 
00:36:29.774 [2024-11-19 16:42:19.937136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.774 [2024-11-19 16:42:19.937168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.774 qpair failed and we were unable to recover it.
00:36:29.774 [2024-11-19 16:42:19.937278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.774 [2024-11-19 16:42:19.937304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.774 qpair failed and we were unable to recover it.
00:36:29.774 [2024-11-19 16:42:19.937439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.937488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.937574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.937599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.937686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.937713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.937803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.937832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.937909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.937935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.938888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.938976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.939933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.939960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.940846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.940999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.941025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.941121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.941148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.941237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.941263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.941340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.941366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.941452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.775 [2024-11-19 16:42:19.941479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.775 qpair failed and we were unable to recover it.
00:36:29.775 [2024-11-19 16:42:19.941624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.941650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.941791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.941817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.941933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.941959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.942964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.942990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.943871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.943899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.944842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.944989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.945160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.945274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.945482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.945676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.945868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.945894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.946003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.946030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.946159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.946185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.776 [2024-11-19 16:42:19.946328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.776 [2024-11-19 16:42:19.946354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.776 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.946526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.946586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.946764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.946790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.946908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.946941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.947887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.947936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.948908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.948933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.949965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.949992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.950135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.950251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.950366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.950473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.777 [2024-11-19 16:42:19.950624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.777 qpair failed and we were unable to recover it.
00:36:29.777 [2024-11-19 16:42:19.950715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.777 [2024-11-19 16:42:19.950741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.777 qpair failed and we were unable to recover it. 00:36:29.777 [2024-11-19 16:42:19.950826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.777 [2024-11-19 16:42:19.950854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.777 qpair failed and we were unable to recover it. 00:36:29.777 [2024-11-19 16:42:19.950972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.950998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 
00:36:29.778 [2024-11-19 16:42:19.951431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.951952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.951977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 
00:36:29.778 [2024-11-19 16:42:19.952057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 
00:36:29.778 [2024-11-19 16:42:19.952708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.952906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.952987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 
00:36:29.778 [2024-11-19 16:42:19.953406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.953883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.953908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 
00:36:29.778 [2024-11-19 16:42:19.954029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.954075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.954203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.954232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.954349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.954375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.954486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.778 [2024-11-19 16:42:19.954512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.778 qpair failed and we were unable to recover it. 00:36:29.778 [2024-11-19 16:42:19.954600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.954626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.954739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.954765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.954881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.954907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.954991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.955429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.955872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.955985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.956140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.956269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.956375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.956544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.956657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.956804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.956922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.956949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.957491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.957933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.957972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.958122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.958262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.958428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.958594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.958787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.958924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.958951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 
00:36:29.779 [2024-11-19 16:42:19.959088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.959127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.779 qpair failed and we were unable to recover it. 00:36:29.779 [2024-11-19 16:42:19.959219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.779 [2024-11-19 16:42:19.959247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.959322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.959347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.959490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.959537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.959646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.959673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 
00:36:29.780 [2024-11-19 16:42:19.959756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.959782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.959899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.959932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 
00:36:29.780 [2024-11-19 16:42:19.960524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.960900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.960925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.961040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 
00:36:29.780 [2024-11-19 16:42:19.961199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.961354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.961525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.961634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.961775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 
00:36:29.780 [2024-11-19 16:42:19.961895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.961921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.962029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.962055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.962156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.962184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.962307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.962335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 00:36:29.780 [2024-11-19 16:42:19.962438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.780 [2024-11-19 16:42:19.962462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.780 qpair failed and we were unable to recover it. 
00:36:29.780 [2024-11-19 16:42:19.962571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.962596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.962681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.962705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.962809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.962848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.962998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.963178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.963285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.963420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.963557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.780 [2024-11-19 16:42:19.963677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.780 [2024-11-19 16:42:19.963704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.780 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.963793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.963818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.963953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.963979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.964864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.964976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.965923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.965950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.966910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.966937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.781 [2024-11-19 16:42:19.967948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.781 [2024-11-19 16:42:19.967987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.781 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.968875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.968989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.969855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.969881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.970862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.970888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.971853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.971992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.972017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.972109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.972135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.972248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.972273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.782 qpair failed and we were unable to recover it.
00:36:29.782 [2024-11-19 16:42:19.972390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.782 [2024-11-19 16:42:19.972416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.972533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.972558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.972674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.972699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.972813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.972839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.972981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.973941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.973966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.974928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.974954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.783 [2024-11-19 16:42:19.975805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.783 [2024-11-19 16:42:19.975830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.783 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.975904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.975929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.976886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.976913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.977958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.784 [2024-11-19 16:42:19.977984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.784 qpair failed and we were unable to recover it.
00:36:29.784 [2024-11-19 16:42:19.978057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 
00:36:29.784 [2024-11-19 16:42:19.978713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.978873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.978995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 
00:36:29.784 [2024-11-19 16:42:19.979426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.979846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.979987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 
00:36:29.784 [2024-11-19 16:42:19.980136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.980286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.980419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.980564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.784 qpair failed and we were unable to recover it. 00:36:29.784 [2024-11-19 16:42:19.980700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.784 [2024-11-19 16:42:19.980726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.980851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.980878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.980990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.981488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.981893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.981918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.982168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.982776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.982883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.982909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.983414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.983973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.983998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.984087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.984114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.984200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.984225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.984312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.984337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.984456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.984481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 00:36:29.785 [2024-11-19 16:42:19.984567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.785 [2024-11-19 16:42:19.984593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.785 qpair failed and we were unable to recover it. 
00:36:29.785 [2024-11-19 16:42:19.984708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.984732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.984813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.984838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.984955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.984979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 
00:36:29.786 [2024-11-19 16:42:19.985315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.985866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.985893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 
00:36:29.786 [2024-11-19 16:42:19.986010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 
00:36:29.786 [2024-11-19 16:42:19.986682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.986961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.986986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 
00:36:29.786 [2024-11-19 16:42:19.987361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 00:36:29.786 [2024-11-19 16:42:19.987817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.786 [2024-11-19 16:42:19.987843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.786 qpair failed and we were unable to recover it. 
00:36:29.786 [2024-11-19 16:42:19.987951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.987976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.988851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.988979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.989006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.989129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.989156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.989251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.989276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.989358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.989384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.786 [2024-11-19 16:42:19.989478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.786 [2024-11-19 16:42:19.989504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.786 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.989594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.989619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.989726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.989765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.989888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.989914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.990904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.990929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.991923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.991949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.992891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.992999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.993879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.993994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.994964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.994990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.995108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.995134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.787 [2024-11-19 16:42:19.995224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.787 [2024-11-19 16:42:19.995250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.787 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.995337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.995363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.995472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.995498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.995612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.995637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.995728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.995754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.995837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.995863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.996839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.996866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.997868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.997982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.998881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.998987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.999122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.999231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.999329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.999437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.788 qpair failed and we were unable to recover it.
00:36:29.788 [2024-11-19 16:42:19.999572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.788 [2024-11-19 16:42:19.999597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:19.999669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:19.999695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:19.999780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:19.999806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:19.999888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:19.999913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:19.999997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.000859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.000980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.001858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.001975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.002957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.002984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.003089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.003116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.003201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.789 [2024-11-19 16:42:20.003228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.789 qpair failed and we were unable to recover it.
00:36:29.789 [2024-11-19 16:42:20.003319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.003346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.003473] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451970 is same with the state(6) to be set 00:36:29.789 [2024-11-19 16:42:20.003600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.003628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.003706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.003732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.003817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.003844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.003984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 
00:36:29.789 [2024-11-19 16:42:20.004114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.004276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.004464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.004633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.004801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 
00:36:29.789 [2024-11-19 16:42:20.004945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.004977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.005092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.789 [2024-11-19 16:42:20.005136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.789 qpair failed and we were unable to recover it. 00:36:29.789 [2024-11-19 16:42:20.005260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.005289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.005382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.005410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.005497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.005524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.005642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.005692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.005824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.005869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.005979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.006130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.006253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.006368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.006559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.006762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.006933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.006960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.007179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.007875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.007907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.007990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.008103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.008227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.008356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.008546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.008777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.008918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.008945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.009332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.009922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.009948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.010064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 
00:36:29.790 [2024-11-19 16:42:20.010652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.790 [2024-11-19 16:42:20.010956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.790 [2024-11-19 16:42:20.010981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.790 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 
00:36:29.791 [2024-11-19 16:42:20.011341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.011882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.011910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 
00:36:29.791 [2024-11-19 16:42:20.012030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 
00:36:29.791 [2024-11-19 16:42:20.012695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.012941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.012967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 
00:36:29.791 [2024-11-19 16:42:20.013355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 00:36:29.791 [2024-11-19 16:42:20.013958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.791 [2024-11-19 16:42:20.013984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.791 qpair failed and we were unable to recover it. 
00:36:29.791 [2024-11-19 16:42:20.014102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.014881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.014908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.015934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.015983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.016095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.016122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.016203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.016229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.016323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.016360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.791 [2024-11-19 16:42:20.016516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.791 [2024-11-19 16:42:20.016563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.791 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.016706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.016754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.016873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.016907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.017918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.017945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.018906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.018932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.019871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.019927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.020887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.020973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.021871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.021898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.022038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.022066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.022161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.022187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.792 qpair failed and we were unable to recover it.
00:36:29.792 [2024-11-19 16:42:20.022309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.792 [2024-11-19 16:42:20.022338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.022433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.022459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.022596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.022641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.022770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.022819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.022923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.022949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.023950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.023990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.024906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.024933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.025934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.025974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.026913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.026973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.027087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.027114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.027202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.027230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.027313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.793 [2024-11-19 16:42:20.027340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.793 qpair failed and we were unable to recover it.
00:36:29.793 [2024-11-19 16:42:20.027451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.027478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.027593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.027620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.027732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.027759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.027878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.027906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.028882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.028997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.029884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.029912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.030962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.794 [2024-11-19 16:42:20.030988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:29.794 qpair failed and we were unable to recover it.
00:36:29.794 [2024-11-19 16:42:20.031085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.794 [2024-11-19 16:42:20.031111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.794 qpair failed and we were unable to recover it. 00:36:29.794 [2024-11-19 16:42:20.031203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.794 [2024-11-19 16:42:20.031230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.794 qpair failed and we were unable to recover it. 00:36:29.794 [2024-11-19 16:42:20.031319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.794 [2024-11-19 16:42:20.031344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.794 qpair failed and we were unable to recover it. 00:36:29.794 [2024-11-19 16:42:20.031487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.794 [2024-11-19 16:42:20.031514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.794 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.031621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.031647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.031731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.031757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.031860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.031886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.031966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.031991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.032324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.032808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.032939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.032978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.033575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.033857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.033890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.034345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.034894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.034984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.035088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.035692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.035953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.035984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.036119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.036146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 00:36:29.795 [2024-11-19 16:42:20.036237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.036263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.795 qpair failed and we were unable to recover it. 
00:36:29.795 [2024-11-19 16:42:20.036336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.795 [2024-11-19 16:42:20.036362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.036454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.036481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.036573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.036606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.036737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.036763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.036848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.036875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.036968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.036995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.037599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.037936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.037962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.038178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.038816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.038958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.038984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.039453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.039964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.039991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.040082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.040248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.040363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.040487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 00:36:29.796 [2024-11-19 16:42:20.040626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it. 
00:36:29.796 [2024-11-19 16:42:20.040737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.796 [2024-11-19 16:42:20.040763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:29.796 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." triplet repeated continuously from 16:42:20.040 through 16:42:20.055 for tqpairs 0x1443b40, 0x7feed8000b90, 0x7feecc000b90, and 0x7feed4000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:36:30.085 [2024-11-19 16:42:20.055144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 00:36:30.085 [2024-11-19 16:42:20.055254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 00:36:30.085 [2024-11-19 16:42:20.055371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 00:36:30.085 [2024-11-19 16:42:20.055479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 00:36:30.085 [2024-11-19 16:42:20.055599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 
00:36:30.085 [2024-11-19 16:42:20.055715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.085 [2024-11-19 16:42:20.055741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.085 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.055827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.055853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.055943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.055968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.056321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.056818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.056954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.056982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.057529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.057891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.057930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.058202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.058798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.058944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.058970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.059440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.059933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.059972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 
00:36:30.086 [2024-11-19 16:42:20.060074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.060103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.060227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.060253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.060339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.060365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.086 qpair failed and we were unable to recover it. 00:36:30.086 [2024-11-19 16:42:20.060475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.086 [2024-11-19 16:42:20.060500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.060583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.060609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.060697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.060722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.060853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.060893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.061412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.061938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.061966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.062049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.062677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.062914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.062941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.063365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.063820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.063945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.063985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.064641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.064893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.064984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.065011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.065109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.065135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 
00:36:30.087 [2024-11-19 16:42:20.065224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.087 [2024-11-19 16:42:20.065250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.087 qpair failed and we were unable to recover it. 00:36:30.087 [2024-11-19 16:42:20.065333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.065443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.065558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.065665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.065774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.065891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.065918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.066340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.066829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.066855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.066990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.067680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.067939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.067978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.068313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.088 [2024-11-19 16:42:20.068797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 
00:36:30.088 [2024-11-19 16:42:20.068935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.088 [2024-11-19 16:42:20.068960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.088 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.069499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.069974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.069999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.070091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.070657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.070923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.070949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.071307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.071839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.071952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.071979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.072499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.072887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.072916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.073033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.073061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 
00:36:30.089 [2024-11-19 16:42:20.073156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.073183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.073278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.073305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.073416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.089 [2024-11-19 16:42:20.073442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.089 qpair failed and we were unable to recover it. 00:36:30.089 [2024-11-19 16:42:20.073527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.073554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.073643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.073670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.073788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.073816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.073907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.073934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.074349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.074826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.074935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.074960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.075503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.075900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.075929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.076191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.076755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.076900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.076981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.077007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.077135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.077161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 00:36:30.090 [2024-11-19 16:42:20.077243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.090 [2024-11-19 16:42:20.077269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.090 qpair failed and we were unable to recover it. 
00:36:30.090 [2024-11-19 16:42:20.077346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.090 [2024-11-19 16:42:20.077449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.090 [2024-11-19 16:42:20.077575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.090 [2024-11-19 16:42:20.077687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.090 [2024-11-19 16:42:20.077804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.090 [2024-11-19 16:42:20.077915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.090 [2024-11-19 16:42:20.077941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.090 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.078953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.078980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.079861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.079961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.080892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.080931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.081925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.081964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.091 [2024-11-19 16:42:20.082717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.091 [2024-11-19 16:42:20.082744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.091 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.082842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.082881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.082964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.082991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.083964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.083990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.084906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.084932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.085939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.085966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.086961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.086989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.087117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.087145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.087219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.087246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.087329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.087356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.092 qpair failed and we were unable to recover it.
00:36:30.092 [2024-11-19 16:42:20.087438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.092 [2024-11-19 16:42:20.087465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.087559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.087598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.087710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.087737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.087862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.087906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.087994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.088868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.088976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.089003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.089096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.089125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.089244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.093 [2024-11-19 16:42:20.089270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.093 qpair failed and we were unable to recover it.
00:36:30.093 [2024-11-19 16:42:20.089348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.089474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.089594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.089712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.089835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 
00:36:30.093 [2024-11-19 16:42:20.089968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.089995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 
00:36:30.093 [2024-11-19 16:42:20.090552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.090918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.090944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.091046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 
00:36:30.093 [2024-11-19 16:42:20.091207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.091316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.091419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.091556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 00:36:30.093 [2024-11-19 16:42:20.091688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.093 [2024-11-19 16:42:20.091714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.093 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.091827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.091853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.091927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.091953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.092463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.092895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.092921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.093142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.093783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.093901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.093927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.094397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.094808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.094839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.095000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.095123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.095231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.095369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.095526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.095756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.095914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.095943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 
00:36:30.094 [2024-11-19 16:42:20.096555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.094 [2024-11-19 16:42:20.096805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.094 [2024-11-19 16:42:20.096832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.094 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.096914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.096941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.097147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.097708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.097881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.097979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.098364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.098838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.098946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.098972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.099516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.099917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.099944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.100222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.100329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.100440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.100563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.100703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.100888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.100923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.101083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.101112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.101224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.101252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.101331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.101357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.101466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.101492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 00:36:30.095 [2024-11-19 16:42:20.101576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.095 [2024-11-19 16:42:20.101607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.095 qpair failed and we were unable to recover it. 
00:36:30.095 [2024-11-19 16:42:20.101727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.101755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.101855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.101883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.101970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.101996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.102918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.102946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.103932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.104910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.104935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.105905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.105986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.106012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.106113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.106139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.106222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.106247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.106361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.106386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.106466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.096 [2024-11-19 16:42:20.106491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.096 qpair failed and we were unable to recover it.
00:36:30.096 [2024-11-19 16:42:20.106599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.106625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.106710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.106736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.106834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.106862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.106961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.106990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.107943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.107969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.108919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.108998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.109891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.109926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.110929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.097 [2024-11-19 16:42:20.110957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.097 qpair failed and we were unable to recover it.
00:36:30.097 [2024-11-19 16:42:20.111043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.111949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.111975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.112881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.112907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.113000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.113026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.113131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.113160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.113253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.113284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.113394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.098 [2024-11-19 16:42:20.113421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.098 qpair failed and we were unable to recover it.
00:36:30.098 [2024-11-19 16:42:20.113553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.113596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.113695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.113723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.113823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.113849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.113963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.113990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.114085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 
00:36:30.098 [2024-11-19 16:42:20.114204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.114322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.114444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.114583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.114789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 
00:36:30.098 [2024-11-19 16:42:20.114925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.114952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.115060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.115095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.115202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.115228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.115329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.098 [2024-11-19 16:42:20.115355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.098 qpair failed and we were unable to recover it. 00:36:30.098 [2024-11-19 16:42:20.115464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.115490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.115573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.115600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.115689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.115727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.115863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.115921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.116248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.116829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.116969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.116996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.117077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.117184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.117325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.117486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.117642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.117840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.117887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.118278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.118841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.118869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.119005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.119726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.119972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.119997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.120097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.120124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.120240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.120266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 
00:36:30.099 [2024-11-19 16:42:20.120362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.120390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.099 qpair failed and we were unable to recover it. 00:36:30.099 [2024-11-19 16:42:20.120473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.099 [2024-11-19 16:42:20.120499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.120584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.120611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.120729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.120755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.120839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.120867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.120945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.120973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.121595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.121882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.121908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.122291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.122821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.122945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.122975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.123610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.123857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.123896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.124264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 00:36:30.100 [2024-11-19 16:42:20.124702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.100 [2024-11-19 16:42:20.124728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.100 qpair failed and we were unable to recover it. 
00:36:30.100 [2024-11-19 16:42:20.124820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.100 [2024-11-19 16:42:20.124845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.100 qpair failed and we were unable to recover it.
00:36:30.100 [2024-11-19 16:42:20.124966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.100 [2024-11-19 16:42:20.124995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.100 qpair failed and we were unable to recover it.
00:36:30.100 [2024-11-19 16:42:20.125125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.100 [2024-11-19 16:42:20.125154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.100 qpair failed and we were unable to recover it.
00:36:30.100 [2024-11-19 16:42:20.125250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.100 [2024-11-19 16:42:20.125279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.100 qpair failed and we were unable to recover it.
00:36:30.104 (the same "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it" sequence repeated on every subsequent attempt through [2024-11-19 16:42:20.139081], cycling across tqpairs 0x7feed8000b90, 0x7feecc000b90, 0x7feed4000b90, and 0x1443b40, all with addr=10.0.0.2, port=4420)
00:36:30.104 [2024-11-19 16:42:20.139173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.139289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.139401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.139516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.139629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.139776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.139923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.139962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.140434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.140889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.140982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.141101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.141226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.141334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.141475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.141608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.141756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.141914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.141940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.142404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.142960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.142990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.143085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.143196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.143312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.143450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.143564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 
00:36:30.104 [2024-11-19 16:42:20.143673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.104 qpair failed and we were unable to recover it. 00:36:30.104 [2024-11-19 16:42:20.143784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.104 [2024-11-19 16:42:20.143810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.143903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.143928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.144268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.144833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.144961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.144988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.145475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.145930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.145958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.146085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.146673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.146938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.146964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.147288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.147918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.147944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 
00:36:30.105 [2024-11-19 16:42:20.148022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.148048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.148151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.148177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.148263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.148289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.105 [2024-11-19 16:42:20.148369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.105 [2024-11-19 16:42:20.148394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.105 qpair failed and we were unable to recover it. 00:36:30.106 [2024-11-19 16:42:20.148487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.148515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 
00:36:30.106 [2024-11-19 16:42:20.148592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.148618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 00:36:30.106 [2024-11-19 16:42:20.148711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.148749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 00:36:30.106 [2024-11-19 16:42:20.148839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.148865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 00:36:30.106 [2024-11-19 16:42:20.148945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.148971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 00:36:30.106 [2024-11-19 16:42:20.149076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.106 [2024-11-19 16:42:20.149102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.106 qpair failed and we were unable to recover it. 
[same error pair repeats from 16:42:20.149195 through 16:42:20.162027 — posix.c:1054:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error to addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it.", cycling over tqpairs 0x1443b40, 0x7feecc000b90, 0x7feed4000b90, and 0x7feed8000b90; repeats omitted]
00:36:30.109 [2024-11-19 16:42:20.162125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.162655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.162890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.162918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.163254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.163759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.163878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.163917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.164534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.164851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.164987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.165113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.165236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.165364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.165496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.165635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 00:36:30.109 [2024-11-19 16:42:20.165784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.109 qpair failed and we were unable to recover it. 
00:36:30.109 [2024-11-19 16:42:20.165914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.109 [2024-11-19 16:42:20.165939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.166477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.166956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.166982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.167079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.167650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.167925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.167953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.168301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.168752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.168899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.168925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.110 [2024-11-19 16:42:20.169525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.169898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 00:36:30.110 [2024-11-19 16:42:20.169984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.110 [2024-11-19 16:42:20.170014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.110 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.170103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.170688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.170918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.170946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.171269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.171771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.171885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.171913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.172534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.172909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.172998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.173241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 
00:36:30.111 [2024-11-19 16:42:20.173843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.173949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.173975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.174086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.174112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.111 qpair failed and we were unable to recover it. 00:36:30.111 [2024-11-19 16:42:20.174202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.111 [2024-11-19 16:42:20.174228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.174308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.174423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.174597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.174702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.174820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.174959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.174986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.175104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.175668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.175897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.175922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.176219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.176822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.176944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.176971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.177427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.177940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.177967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.178050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.178178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.178295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.178406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.112 [2024-11-19 16:42:20.178515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 
00:36:30.112 [2024-11-19 16:42:20.178629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.112 [2024-11-19 16:42:20.178657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.112 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.178747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.178785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.178874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.178901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.179290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.179872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.179900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.179994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.180551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.180915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.180997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.181121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.181241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.181361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.181469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.181628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.181778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.181921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.181947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.182442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.182908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.182997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.183022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 
00:36:30.113 [2024-11-19 16:42:20.183123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.183149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.183234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.183259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.183345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.113 [2024-11-19 16:42:20.183372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.113 qpair failed and we were unable to recover it. 00:36:30.113 [2024-11-19 16:42:20.183450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.183476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 00:36:30.114 [2024-11-19 16:42:20.183596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.183621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 
00:36:30.114 [2024-11-19 16:42:20.183725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.183763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 00:36:30.114 [2024-11-19 16:42:20.183891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.183918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 00:36:30.114 [2024-11-19 16:42:20.184010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.184036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 00:36:30.114 [2024-11-19 16:42:20.184141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.184168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 00:36:30.114 [2024-11-19 16:42:20.184250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.114 [2024-11-19 16:42:20.184276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.114 qpair failed and we were unable to recover it. 
00:36:30.114 [2024-11-19 16:42:20.184360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.184388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.184477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.184502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.184605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.184631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.184727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.184760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.184871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.184897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.184999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.185181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.185302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.185542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.185705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.185854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.185883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.186894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.186925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.187918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.187944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.188091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.188234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.188355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.188467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.114 [2024-11-19 16:42:20.188579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.114 qpair failed and we were unable to recover it.
00:36:30.114 [2024-11-19 16:42:20.188668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.188694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.188794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.188820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.188928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.188955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.189921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.189960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.190922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.190949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.191954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.191980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.192945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.192970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.193089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.193116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.193239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.193265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.115 [2024-11-19 16:42:20.193350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.115 [2024-11-19 16:42:20.193376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.115 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.193464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.193490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.193591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.193632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.193724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.193752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.193841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.193867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.193955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.193981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.194842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.194881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.195871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.195904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.116 [2024-11-19 16:42:20.196721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.116 qpair failed and we were unable to recover it.
00:36:30.116 [2024-11-19 16:42:20.196811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.196837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.116 [2024-11-19 16:42:20.196925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.196950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.116 [2024-11-19 16:42:20.197031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.197057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.116 [2024-11-19 16:42:20.197177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.197205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.116 [2024-11-19 16:42:20.197300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.197327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 
00:36:30.116 [2024-11-19 16:42:20.197426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.197452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.116 [2024-11-19 16:42:20.197536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.116 [2024-11-19 16:42:20.197563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.116 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.197651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.197677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.197775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.197801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.197915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.197942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.198081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.198710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.198932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.198958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.199275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.199871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.199898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.199989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.200450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.200844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.200871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.201137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.117 [2024-11-19 16:42:20.201820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.201855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.201985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.202011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.202105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.202131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.202216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.202241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 00:36:30.117 [2024-11-19 16:42:20.202338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.117 [2024-11-19 16:42:20.202363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.117 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.202440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.202465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.202543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.202569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.202651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.202677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.202765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.202791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.202918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.202957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.203046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.203645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.203927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.203958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.204269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.204767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.204897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.204928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.205482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.205886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.205985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.206107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.206229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.206344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.206493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.206639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 
00:36:30.118 [2024-11-19 16:42:20.206754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.206952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.118 [2024-11-19 16:42:20.206980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.118 qpair failed and we were unable to recover it. 00:36:30.118 [2024-11-19 16:42:20.207102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.207216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.207362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 
00:36:30.119 [2024-11-19 16:42:20.207475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.207625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.207752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.207888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.207913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 00:36:30.119 [2024-11-19 16:42:20.208026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.119 [2024-11-19 16:42:20.208051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.119 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.222888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.222916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.223636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.223893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.223920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.224306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.224866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.224893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.224998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.225706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.225898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.225989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 
00:36:30.122 [2024-11-19 16:42:20.226374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.122 [2024-11-19 16:42:20.226858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.122 qpair failed and we were unable to recover it. 00:36:30.122 [2024-11-19 16:42:20.226947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.226974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.227079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.227225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.227382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.227494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.227637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.227785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.227951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.227977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.228488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.228910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.228938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.229080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.229218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.229393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.229512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.229634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.229814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.229956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.229983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.230065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.230248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.230368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.230634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.230799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.230963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.230989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.231087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.231259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.231408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.231634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.231793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.231939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.231967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.232059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.232095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.232206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.232233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 
00:36:30.123 [2024-11-19 16:42:20.232352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.123 [2024-11-19 16:42:20.232379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.123 qpair failed and we were unable to recover it. 00:36:30.123 [2024-11-19 16:42:20.232496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.232523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.232666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.232693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.232805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.232832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.232920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.232948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 
00:36:30.124 [2024-11-19 16:42:20.233103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.233144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.233234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.233261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.233349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.233378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.233465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.233492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 00:36:30.124 [2024-11-19 16:42:20.233582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.124 [2024-11-19 16:42:20.233609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.124 qpair failed and we were unable to recover it. 
00:36:30.124 [2024-11-19 16:42:20.233751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.233777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.233915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.233942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.234927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.234955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.235075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.235103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.235240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.235273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.235430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.235495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.235651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.235709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.235923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.235950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.236878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.236903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.237916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.237942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.238027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.124 [2024-11-19 16:42:20.238054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.124 qpair failed and we were unable to recover it.
00:36:30.124 [2024-11-19 16:42:20.238177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.238316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.238428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.238542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.238668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.238863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.238891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.239902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.239997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.240860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.240886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.241849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.241876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.242042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.242192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.242303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.242437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.125 [2024-11-19 16:42:20.242599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.125 qpair failed and we were unable to recover it.
00:36:30.125 [2024-11-19 16:42:20.242712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.242738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.242850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.242877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.242989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.243861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.243977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.244100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.244289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.244446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.244734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.244961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.244990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.245851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.245877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.246898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.246937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.247850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.247880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.248014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.248054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.248210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.248239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.126 [2024-11-19 16:42:20.248324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.126 [2024-11-19 16:42:20.248352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.126 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.248465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.248492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.248572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.248600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.248752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.248805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.248894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.248923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.249036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.249076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.249194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.249221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.249305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.249332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.249563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.249629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.249857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.249898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.127 [2024-11-19 16:42:20.250876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.127 qpair failed and we were unable to recover it.
00:36:30.127 [2024-11-19 16:42:20.250953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.250979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 
00:36:30.127 [2024-11-19 16:42:20.251623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.251912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.251938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.252019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.252048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.252176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.252203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 
00:36:30.127 [2024-11-19 16:42:20.252316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.252342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.252521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.252594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.252885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.252950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.253103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.253131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.253274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.253301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 
00:36:30.127 [2024-11-19 16:42:20.253434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.253480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.253640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.253733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.253914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.253983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.254076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.254104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.254216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.254243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 
00:36:30.127 [2024-11-19 16:42:20.254363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.254389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.254482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.254510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.254778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.254845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.255032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.127 [2024-11-19 16:42:20.255060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.127 qpair failed and we were unable to recover it. 00:36:30.127 [2024-11-19 16:42:20.255184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.255211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.255314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.255341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.255435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.255462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.255570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.255597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.255767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.255847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.255974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.256130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.256280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.256449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.256590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.256728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.256839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.256865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.257592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.257878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.257907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.258028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.258180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.258299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.258487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.258776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.258961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.258998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.259149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.259176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.259290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.259318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.259429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.259456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.259735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.259802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.260307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.260912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.260989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.261128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.261246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.261391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.261501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 00:36:30.128 [2024-11-19 16:42:20.261636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.128 [2024-11-19 16:42:20.261662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.128 qpair failed and we were unable to recover it. 
00:36:30.128 [2024-11-19 16:42:20.261839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.261915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.262153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.262180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.262335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.262411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.262709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.262775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.263042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.263122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 
00:36:30.129 [2024-11-19 16:42:20.263272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.263299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.263418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.263466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.263692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.263756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.263978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.264005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 00:36:30.129 [2024-11-19 16:42:20.264098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.129 [2024-11-19 16:42:20.264125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.129 qpair failed and we were unable to recover it. 
00:36:30.129 [2024-11-19 16:42:20.264280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:30.129 [2024-11-19 16:42:20.264306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 
00:36:30.129 qpair failed and we were unable to recover it. 
00:36:30.132 [message repeated for each subsequent connection attempt on tqpair=0x7feed8000b90 through 2024-11-19 16:42:20] 
00:36:30.132 [2024-11-19 16:42:20.296897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:30.132 [2024-11-19 16:42:20.296963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 
00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.297196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.297262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.297481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.297550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.297772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.297840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.298139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.298205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.298488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.298553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.298806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.298874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.299100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.299165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.299354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.299419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.299719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.299785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.300033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.300114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.300378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.300444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.300698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.300762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.300972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.301040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.301336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.301402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.301661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.301728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.301975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.302041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.302299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.302366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.302616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.302681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.302883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.302949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.303228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.303295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.303543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.303607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.303880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.303945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.304242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.304310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.304551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.304616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.304906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.304971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 
00:36:30.132 [2024-11-19 16:42:20.305216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.132 [2024-11-19 16:42:20.305283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.132 qpair failed and we were unable to recover it. 00:36:30.132 [2024-11-19 16:42:20.305493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.305577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.305822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.305887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.306183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.306251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.306516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.306581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.306833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.306898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.307148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.307216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.307461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.307527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.307756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.307819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.308041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.308145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.308390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.308456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.308752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.308818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.309082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.309149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.309371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.309436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.309655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.309720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.310050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.310150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.310363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.310424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.310634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.310699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.310951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.311016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.311320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.311384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.311612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.311677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.311934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.312001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.312244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.312310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.312571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.312636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.312887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.312953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.313207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.313275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.313584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.313649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.313865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.313931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.314144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.314223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.314522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.314587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.314895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.314959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.315203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.315269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.315537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.315602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.315865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.315929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.316227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.316294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.316557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.316622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.316843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.316908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.317194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.317260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.317463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.317532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.317749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.317816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.318113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.318181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.318378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.318447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.318746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.318811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.319097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.319165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 00:36:30.133 [2024-11-19 16:42:20.319435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.319499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.133 qpair failed and we were unable to recover it. 
00:36:30.133 [2024-11-19 16:42:20.319746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.133 [2024-11-19 16:42:20.319813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.134 qpair failed and we were unable to recover it. 00:36:30.134 [2024-11-19 16:42:20.320029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.134 [2024-11-19 16:42:20.320109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.134 qpair failed and we were unable to recover it. 00:36:30.134 [2024-11-19 16:42:20.320315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.134 [2024-11-19 16:42:20.320379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.134 qpair failed and we were unable to recover it. 00:36:30.134 [2024-11-19 16:42:20.320649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.134 [2024-11-19 16:42:20.320714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.134 qpair failed and we were unable to recover it. 00:36:30.134 [2024-11-19 16:42:20.320971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.134 [2024-11-19 16:42:20.321037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.134 qpair failed and we were unable to recover it. 
00:36:30.134 [2024-11-19 16:42:20.321312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.134 [2024-11-19 16:42:20.321377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.134 qpair failed and we were unable to recover it.
[... the same connect() / qpair-connect error triple repeats ~115 times for tqpair=0x7feed8000b90 (10.0.0.2:4420), timestamps 16:42:20.321 through 16:42:20.357; repeats elided ...]
00:36:30.136 [2024-11-19 16:42:20.357491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.136 [2024-11-19 16:42:20.357556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.136 qpair failed and we were unable to recover it.
00:36:30.136 [2024-11-19 16:42:20.357824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.357889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 00:36:30.136 [2024-11-19 16:42:20.358148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.358215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 00:36:30.136 [2024-11-19 16:42:20.358459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.358525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 00:36:30.136 [2024-11-19 16:42:20.358742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.358810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 00:36:30.136 [2024-11-19 16:42:20.359097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.359165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 
00:36:30.136 [2024-11-19 16:42:20.359387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.136 [2024-11-19 16:42:20.359452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.136 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.359746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.359811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.360092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.360159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.360424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.360490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.360686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.360751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.360967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.361034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.361341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.361407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.361703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.361767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.362006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.362100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.362358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.362423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.362704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.362769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.362972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.363038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.363323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.363388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.363586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.363654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.363894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.363960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.364187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.364254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.364504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.364570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.364788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.364854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.365088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.365165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.365415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.365481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.365703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.365768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.366027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.366122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.366341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.366406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.366598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.366664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.366910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.366977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.367256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.367322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.367547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.367612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.367854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.367921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.368188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.368256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.368515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.368581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.368881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.368946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.369166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.369233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.369514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.369580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.369803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.369867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.370136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.370203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.370394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.370462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.370677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.370743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.370948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.371014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.371249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.371315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.371577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.371641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.137 [2024-11-19 16:42:20.371868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.371934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.372180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.372247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.372455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.372520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.372733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.372799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 00:36:30.137 [2024-11-19 16:42:20.373032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.137 [2024-11-19 16:42:20.373111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.137 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.373340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.373407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.373698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.373764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.373988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.374057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.374329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.374395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.374645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.374713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.374930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.374996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.375224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.375292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.375548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.375613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.375855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.375920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.376189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.376256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.376474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.376541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.376754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.376820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.377032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.377127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.377391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.377466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.377661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.377726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.377942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.378011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.378299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.378365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.378571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.378635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.378921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.378984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.379246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.379314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.379516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.379581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.379802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.379869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.380123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.380191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.380448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.380514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 00:36:30.138 [2024-11-19 16:42:20.380761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.380827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
00:36:30.138 [2024-11-19 16:42:20.381098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.138 [2024-11-19 16:42:20.381164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.138 qpair failed and we were unable to recover it. 
[... identical connect() failure / qpair recovery messages repeated ...]
00:36:30.418 [2024-11-19 16:42:20.418364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.418429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.418717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.418782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.419088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.419156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.419463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.419527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.419828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.419893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 
00:36:30.418 [2024-11-19 16:42:20.420188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.420255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.420564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.420635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.420946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.421012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.421223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.421290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.421573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.421638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 
00:36:30.418 [2024-11-19 16:42:20.421911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.421976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.422259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.422325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.422633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.422698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.422945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.423010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.423320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.423385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 
00:36:30.418 [2024-11-19 16:42:20.423673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.423738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.424007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.424086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.424293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.424370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.424659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.424726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.424989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.425057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 
00:36:30.418 [2024-11-19 16:42:20.425400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.425466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.425733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.425798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.426014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.426097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.426344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.418 [2024-11-19 16:42:20.426419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.418 qpair failed and we were unable to recover it. 00:36:30.418 [2024-11-19 16:42:20.426714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.426779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.427105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.427171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.427472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.427542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.427838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.427903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.428168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.428243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.428541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.428606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.428817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.428880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.429108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.429175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.429477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.429541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.429799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.429876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.430171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.430239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.430551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.430614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.430868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.430934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.431201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.431269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.431561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.431625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.431885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.431952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.432220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.432287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.432681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.432746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.433030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.433117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.433385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.433450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.433734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.433799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.434044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.434128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.434397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.434465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.434744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.434812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.435058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.435157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.435413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.435478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.435772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.435837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.436038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.436121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.436383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.436448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.436745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.436810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.437109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.437175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.437474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.437538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.437794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.437858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.438119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.438186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.438449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.438513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.438797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.438864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 
00:36:30.419 [2024-11-19 16:42:20.439111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.439180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.419 [2024-11-19 16:42:20.439392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.419 [2024-11-19 16:42:20.439459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.419 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.439718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.439783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.440040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.440123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.440426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.440491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 
00:36:30.420 [2024-11-19 16:42:20.440736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.440802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.441057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.441137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.441385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.441457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.441718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.441785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.442031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.442114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 
00:36:30.420 [2024-11-19 16:42:20.442415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.442479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.442778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.442843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.443057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.443151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.443363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.443441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.443747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.443812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 
00:36:30.420 [2024-11-19 16:42:20.444118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.444186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.444403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.444469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.444698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.444763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.445013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.445093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 00:36:30.420 [2024-11-19 16:42:20.445354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.420 [2024-11-19 16:42:20.445419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.420 qpair failed and we were unable to recover it. 
00:36:30.423 [2024-11-19 16:42:20.482497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.482562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.482859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.482924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.483185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.483252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.483517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.483583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.483906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.483970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 
00:36:30.423 [2024-11-19 16:42:20.484252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.484317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.484571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.484637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.484932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.484998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.485267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.485333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.485623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.485688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 
00:36:30.423 [2024-11-19 16:42:20.485941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.486006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.486241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.486307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.486565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.486630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.486944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.487008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.487330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.487397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 
00:36:30.423 [2024-11-19 16:42:20.487712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.423 [2024-11-19 16:42:20.487777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.423 qpair failed and we were unable to recover it. 00:36:30.423 [2024-11-19 16:42:20.488022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.488103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.488370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.488435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.488731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.488796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.489100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.489167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.489412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.489477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.489765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.489830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.490133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.490199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.490466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.490530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.490772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.490838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.491129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.491195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.491410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.491475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.491757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.491823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.492082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.492155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.492455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.492518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.492827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.492903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.493207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.493275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.493571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.493635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.493882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.493946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.494251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.494318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.494622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.494686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.494930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.494997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.495245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.495312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.495615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.495681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.495974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.496040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.496304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.496371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.496670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.496736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.496983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.497048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.497313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.497390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.497637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.497706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.497998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.498062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.498339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.498404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.498652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.498720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.499043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.499143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.499396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.499463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 
00:36:30.424 [2024-11-19 16:42:20.499717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.499782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.500024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.500107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.500419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.500484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.500745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.424 [2024-11-19 16:42:20.500813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.424 qpair failed and we were unable to recover it. 00:36:30.424 [2024-11-19 16:42:20.501117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.501183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.425 [2024-11-19 16:42:20.501468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.501533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.501790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.501855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.502145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.502210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.502470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.502536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.502779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.502845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.425 [2024-11-19 16:42:20.503093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.503169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.503387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.503453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.503701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.503773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.504065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.504146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.504412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.504477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.425 [2024-11-19 16:42:20.504765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.504831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.505146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.505213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.505471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.505538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.505827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.505900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.506170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.506239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.425 [2024-11-19 16:42:20.506533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.506609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.506852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.506917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.507205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.507271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.507591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.507657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.507941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.508005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.425 [2024-11-19 16:42:20.508275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.508342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.508629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.508694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.508885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.508949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.509188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.509253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 00:36:30.425 [2024-11-19 16:42:20.509561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.425 [2024-11-19 16:42:20.509638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.425 qpair failed and we were unable to recover it. 
00:36:30.428 [2024-11-19 16:42:20.546190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.546232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.546354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.546391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.546539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.546575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.546700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.546734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.546939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.547021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 
00:36:30.428 [2024-11-19 16:42:20.547196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.547233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.547383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.547419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.547547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.547582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.547850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.547915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 00:36:30.428 [2024-11-19 16:42:20.548149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.428 [2024-11-19 16:42:20.548202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.428 qpair failed and we were unable to recover it. 
00:36:30.428 [2024-11-19 16:42:20.548344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.548416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.548624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.548693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.548949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.549029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.549245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.549282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.549495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.549549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.549715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.549771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.549932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.549997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.550211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.550248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.550440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.550514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.550760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.550827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.551112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.551149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.551300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.551336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.551529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.551565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.551683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.551747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.551994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.552046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.552208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.552244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.552367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.552404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.552639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.552706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.553015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.553083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.553257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.553294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.553477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.553530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.553720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.553773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.553926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.553980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.554163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.554200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.554355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.554390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.554519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.554556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.554720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.554787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.554984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.555049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.555248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.555285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.555408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.555444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.555599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.555660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.555871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.555935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.556155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.556193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.556306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.556341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.556488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.556524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 
00:36:30.429 [2024-11-19 16:42:20.556640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.556678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.556888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.429 [2024-11-19 16:42:20.556954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.429 qpair failed and we were unable to recover it. 00:36:30.429 [2024-11-19 16:42:20.557171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.557209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.557348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.557414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.557588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.557640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.557876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.557929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.558114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.558150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.558290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.558326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.558458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.558494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.558775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.558854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.559132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.559169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.559301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.559338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.559594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.559647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.559862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.559915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.560134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.560171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.560285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.560321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.560467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.560511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.560764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.560830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.561090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.561146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.561262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.561298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.561500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.561554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.561724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.561777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.561953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.562006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.562185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.562222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.562381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.562417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.562565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.562602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.562864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.562930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.563174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.563211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.563361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.563398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 00:36:30.430 [2024-11-19 16:42:20.563544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.563580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
00:36:30.430 [2024-11-19 16:42:20.563697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.430 [2024-11-19 16:42:20.563733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.430 qpair failed and we were unable to recover it. 
[identical error triplet (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeated verbatim through 2024-11-19 16:42:20.597767; only timestamps differ]
00:36:30.433 [2024-11-19 16:42:20.598021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-11-19 16:42:20.598105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.433 qpair failed and we were unable to recover it. 00:36:30.433 [2024-11-19 16:42:20.598361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-11-19 16:42:20.598428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.433 qpair failed and we were unable to recover it. 00:36:30.433 [2024-11-19 16:42:20.598634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-11-19 16:42:20.598700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.433 qpair failed and we were unable to recover it. 00:36:30.433 [2024-11-19 16:42:20.598968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.433 [2024-11-19 16:42:20.599045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.433 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.599327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.599395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.599636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.599701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.599918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.599994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.600239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.600308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.600556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.600623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.600850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.600916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.601181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.601260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.601547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.601614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.601829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.601898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.602156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.602223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.602438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.602515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.602747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.602815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.603084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.603151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.603405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.603471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.603698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.603768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.603981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.604047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.604323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.604389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.604607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.604673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.604920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.604986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.605274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.605342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.605586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.605652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.605878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.605945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.606182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.606250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.606565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.606630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.606879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.606951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.607214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.607282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.607496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.607560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.607862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.607929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.608219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.608287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.608529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.608597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 00:36:30.434 [2024-11-19 16:42:20.608905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.608971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.434 qpair failed and we were unable to recover it. 
00:36:30.434 [2024-11-19 16:42:20.609205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.434 [2024-11-19 16:42:20.609273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.609536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.609602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.609903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.609980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.610248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.610315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.610533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.610599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.610836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.610903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.611129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.611197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.611490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.611555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.611767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.611835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.612111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.612179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.612442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.612507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.612754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.612821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.613065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.613160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.613429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.613496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.613742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.613809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.614039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.614132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.614433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.614499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.614763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.614831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.615100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.615168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.615388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.615454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.615747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.615812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.616103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.616170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.616430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.616508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.616789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.616856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.617105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.617173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.617366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.617433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.617656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.617722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.617928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.617995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.618272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.618340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.618605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.618671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.618942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.619009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.619315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.619382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.619633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.619699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.619962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.620028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.620344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.620409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 
00:36:30.435 [2024-11-19 16:42:20.620672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.620739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.621030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.621115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.621380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.435 [2024-11-19 16:42:20.621449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.435 qpair failed and we were unable to recover it. 00:36:30.435 [2024-11-19 16:42:20.621764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.621835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.622041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.622138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.622343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.622411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.622629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.622696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.622911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.622980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.623302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.623369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.623602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.623667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.623971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.624037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.624333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.624403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.624677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.624744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.624995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.625061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.625349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.625416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.625665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.625734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.625969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.626035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.626270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.626336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.626562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.626628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.626884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.626962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.627195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.627273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.627527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.627595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.627797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.627864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.628103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.628171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.628401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.628469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.628762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.628829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.629095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.629184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.629402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.629467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.629768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.629834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.630101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.630171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.630403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.630469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.630734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.630801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.631000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.631066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.631363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.631429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.631653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.631720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.631975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.632044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.632355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.632420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.632679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.632745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.633047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.633144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.633399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.633466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 
00:36:30.436 [2024-11-19 16:42:20.633772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.436 [2024-11-19 16:42:20.633839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.436 qpair failed and we were unable to recover it. 00:36:30.436 [2024-11-19 16:42:20.634114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.634187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.634448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.634514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.634765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.634834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.635091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.635165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.635400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.635466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.635704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.635770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.636040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.636124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.636375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.636441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.636739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.636805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.637039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.637129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.637376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.637443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.637682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.637748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.638014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.638094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.638365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.638431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.638673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.638740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.638951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.639027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.639306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.639379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.639641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.639708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.639958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.640026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.640288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.640356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.640669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.640735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.641044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.641143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.641438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.641514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.641763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.641831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.642117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.642184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.642404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.642469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.642682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.642748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.642998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.643066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.643322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.643392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.643688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.643753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.644045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.644125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.644386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.644454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.644716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.644781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.645095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.645164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 
00:36:30.437 [2024-11-19 16:42:20.645379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.645447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.645748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.645814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.646099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.646167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.646458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.437 [2024-11-19 16:42:20.646524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.437 qpair failed and we were unable to recover it. 00:36:30.437 [2024-11-19 16:42:20.646725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.646795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.647057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.647148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.647453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.647519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.647766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.647835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.648132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.648202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.648497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.648563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.648812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.648880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.649133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.649201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.649415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.649491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.649785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.649852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.650108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.650176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.650428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.650494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.650779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.650845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.651148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.651216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.651511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.651577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.651875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.651940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.652200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.652268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.652512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.652578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.652833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.652899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.653200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.653267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.653558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.653624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.653873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.653941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.654230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.654299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.654603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.654670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.654985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.655051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.655376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.655444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.655707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.655776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.656024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.656136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.656428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.656494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.656803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.656869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 00:36:30.438 [2024-11-19 16:42:20.657120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.438 [2024-11-19 16:42:20.657187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.438 qpair failed and we were unable to recover it. 
00:36:30.438 [2024-11-19 16:42:20.657433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.657500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.657753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.657822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.658121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.658189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.658489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.658555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.658873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.658940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.659198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.659267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.438 [2024-11-19 16:42:20.659525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.438 [2024-11-19 16:42:20.659593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.438 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.659847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.659914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.660206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.660274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.660528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.660596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.660857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.660923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.661224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.661292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.661606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.661672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.661978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.662044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.662374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.662440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.662732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.662798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.663113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.663181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.663477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.663556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.663834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.663900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.664161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.664231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.664532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.664598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.664895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.664962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.665237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.665306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.665558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.665625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.665859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.665925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.666124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.666192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.666442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.666509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.666759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.666826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.667094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.667172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.667427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.667494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.667717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.667784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.668137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.668206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.668502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.668569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.668830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.668896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.669118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.669186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.669484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.669550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.669820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.669885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.670120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.670188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.670492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.670559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.439 qpair failed and we were unable to recover it.
00:36:30.439 [2024-11-19 16:42:20.670811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.439 [2024-11-19 16:42:20.670877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.671096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.671172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.671429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.671498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.671751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.671817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.672118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.672186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.672501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.672567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.672827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.672894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.673153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.673222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.673519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.673586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.673831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.673899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.674155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.674222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.674513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.674579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.674800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.674865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.675134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.675203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.675457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.675526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.675829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.675895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.676153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.676221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.676520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.676586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.676846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.676928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.677210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.677279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.677521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.677589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.677900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.677966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.678194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.678264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.678554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.678621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.678869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.678936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.679228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.679296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.679598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.679666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.679957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.680023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.680340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.680406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.680702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.680770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.680985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.681053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.681341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.681408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.681717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.681785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.682096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.682165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.682417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.682484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.682729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.682797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.683022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.683107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.683399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.440 [2024-11-19 16:42:20.683465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.440 qpair failed and we were unable to recover it.
00:36:30.440 [2024-11-19 16:42:20.683752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.683818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.684066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.684166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.684454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.684521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.684785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.684851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.685105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.685175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.685476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.685543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.685793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.685859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.686126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.686194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.686424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.686491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.686742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.686809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.687093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.687161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.687396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.687463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.687709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.687775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.688026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.688128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.688425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.688492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.688747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.688814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.689031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.689121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.689382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.441 [2024-11-19 16:42:20.689448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.441 qpair failed and we were unable to recover it.
00:36:30.441 [2024-11-19 16:42:20.689711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.689777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.690087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.690156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.690445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.690522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.690810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.690877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.691091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.691160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 
00:36:30.441 [2024-11-19 16:42:20.691432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.691497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.691753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.691820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.692122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.692191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.692442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.692509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.692790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.692857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 
00:36:30.441 [2024-11-19 16:42:20.693066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.693148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.693347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.693413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.693697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.693763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.694007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.694088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.694387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.694453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 
00:36:30.441 [2024-11-19 16:42:20.694752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.694818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.695103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.695172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.695471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.695537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.695836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.695903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 00:36:30.441 [2024-11-19 16:42:20.696201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.441 [2024-11-19 16:42:20.696268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.441 qpair failed and we were unable to recover it. 
00:36:30.441 [2024-11-19 16:42:20.696491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.696557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.696840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.696907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.697175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.697244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.697550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.697617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.697862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.697929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.698151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.698219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.698481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.698547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.698845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.698910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.699203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.699271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.699549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.699616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.699902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.699968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.700236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.700304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.700556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.700622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.700914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.700980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.701224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.701292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.701584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.701650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.701904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.701970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.702288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.702357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.702669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.702735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.703030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.703112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.703411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.703477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.703685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.703755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.703966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.704043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.704387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.704453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.704696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.704762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.705048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.705138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.705402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.705468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.705764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.705831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.706100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.706170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.706422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.706489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.706712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.706778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.707095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.707163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.707411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.707478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.707780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.707847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.708144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.708213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 
00:36:30.442 [2024-11-19 16:42:20.708484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.708551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.708769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.708837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.709050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.442 [2024-11-19 16:42:20.709135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.442 qpair failed and we were unable to recover it. 00:36:30.442 [2024-11-19 16:42:20.709437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.709503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.709771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.709837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.710135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.710203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.710500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.710566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.710861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.710928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.711226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.711294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.711528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.711593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.711853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.711920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.712121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.712190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.712445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.712510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.712806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.712873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.713172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.713241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.713448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.713514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.713762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.713830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.714122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.714191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.714442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.714510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.714819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.714886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.715129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.715197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.715460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.715527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.715723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.715789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.716115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.716183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.716478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.716544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.716787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.716853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.717153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.717220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.717419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.717499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.717796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.717864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 00:36:30.443 [2024-11-19 16:42:20.718167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.443 [2024-11-19 16:42:20.718234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.443 qpair failed and we were unable to recover it. 
00:36:30.443 [2024-11-19 16:42:20.718538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.443 [2024-11-19 16:42:20.718605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.443 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 / tqpair=0x7feed4000b90, addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 16:42:20.718 through 16:42:20.757 ...]
00:36:30.722 [2024-11-19 16:42:20.757926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.757991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.758297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.758364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.758663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.758741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.758993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.759059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.759379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.759446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 
00:36:30.722 [2024-11-19 16:42:20.759747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.759814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.760121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.760189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.760485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.760551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.760823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.760892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.761191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.761260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 
00:36:30.722 [2024-11-19 16:42:20.761558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.761626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.761878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.761945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.762243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.762311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.762607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.762674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.762967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.763033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 
00:36:30.722 [2024-11-19 16:42:20.763283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.763351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.763621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.763690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.763956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.764022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.764326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.764393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.722 qpair failed and we were unable to recover it. 00:36:30.722 [2024-11-19 16:42:20.764661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.722 [2024-11-19 16:42:20.764727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.765032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.765120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.765376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.765442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.765706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.765773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.766065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.766152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.766443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.766509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.766805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.766873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.767163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.767232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.767469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.767536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.767783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.767851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.768127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.768195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.768489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.768555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.768762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.768829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.769131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.769199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.769489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.769556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.769799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.769868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.770124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.770192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.770445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.770514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.770765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.770834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.771104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.771173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.771462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.771529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.771728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.771795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.772124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.772193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.772495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.772573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.772783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.772852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.773100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.773169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.773431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.773501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.773753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.773820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.774115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.774184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.774477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.774543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.774838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.774905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.775196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.775264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.775514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.775582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.775842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.775908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.776220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.776289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.776578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.776645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 
00:36:30.723 [2024-11-19 16:42:20.776942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.777009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.777326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.723 [2024-11-19 16:42:20.777393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.723 qpair failed and we were unable to recover it. 00:36:30.723 [2024-11-19 16:42:20.777598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.777664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.777914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.777982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.778244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.778312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 
00:36:30.724 [2024-11-19 16:42:20.778575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.778642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.778893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.778959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.779257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.779326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.779588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.779654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.779961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.780028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 
00:36:30.724 [2024-11-19 16:42:20.780305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.780371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.780678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.780744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.781045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.781136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.781333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.781399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.781656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.781722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 
00:36:30.724 [2024-11-19 16:42:20.782025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.782113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.782404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.782470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.782722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.782791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.783048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.783139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 00:36:30.724 [2024-11-19 16:42:20.783390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.724 [2024-11-19 16:42:20.783458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.724 qpair failed and we were unable to recover it. 
00:36:30.724 [2024-11-19 16:42:20.783708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.724 [2024-11-19 16:42:20.783775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.724 qpair failed and we were unable to recover it.
[... the same three-message sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats for every retry from 16:42:20.784 through 16:42:20.821 ...]
00:36:30.727 [2024-11-19 16:42:20.822109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.822177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.822427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.822494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.822711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.822779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.823033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.823118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.823334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.823400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 
00:36:30.727 [2024-11-19 16:42:20.823700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.823767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.823983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.727 [2024-11-19 16:42:20.824049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.727 qpair failed and we were unable to recover it. 00:36:30.727 [2024-11-19 16:42:20.824258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.824325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.824619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.824685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.824947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.825015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.825272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.825339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.825601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.825667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.825915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.825982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.826225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.826293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.826583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.826649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.826917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.826983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.827231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.827299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.827561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.827627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.827868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.827934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.828140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.828209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.828439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.828506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.828782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.828848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.829063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.829146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.829398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.829466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.829770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.829836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.830130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.830198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.830507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.830574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.830820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.830885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.831147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.831210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.831452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.831514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.831741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.831803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.831994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.832056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.832347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.832409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.832680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.832741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.833005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.833085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.833286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.833349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.833580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.833641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.833883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.833966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.834265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.834328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.834558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.834635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 
00:36:30.728 [2024-11-19 16:42:20.834901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.834968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.835221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.835285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.835512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.835573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.835822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.835883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.728 qpair failed and we were unable to recover it. 00:36:30.728 [2024-11-19 16:42:20.836155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.728 [2024-11-19 16:42:20.836219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.836488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.836550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.836753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.836813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.837044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.837120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.837368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.837430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.837711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.837772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.837959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.838020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.838277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.838340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.838540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.838604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.838857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.838919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.839099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.839163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.839377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.839438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.839659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.839727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.839983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.840050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.840378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.840446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.840655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.840733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.841032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.841136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.841353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.841436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.841639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.841706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.841997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.842063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.842338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.842399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.842665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.842731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.842929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.842995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.843332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.843395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.843634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.843695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.843932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.843995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.844243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.844304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.844491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.844555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.844754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.844817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.845019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.845097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 00:36:30.729 [2024-11-19 16:42:20.845353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.729 [2024-11-19 16:42:20.845414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.729 qpair failed and we were unable to recover it. 
00:36:30.729 [2024-11-19 16:42:20.845690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.729 [2024-11-19 16:42:20.845750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.729 qpair failed and we were unable to recover it.
00:36:30.729 [2024-11-19 16:42:20.845988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.729 [2024-11-19 16:42:20.846050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.729 qpair failed and we were unable to recover it.
00:36:30.729 [2024-11-19 16:42:20.846284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.729 [2024-11-19 16:42:20.846348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.729 qpair failed and we were unable to recover it.
00:36:30.729 [2024-11-19 16:42:20.846574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.729 [2024-11-19 16:42:20.846635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.729 qpair failed and we were unable to recover it.
00:36:30.729 [2024-11-19 16:42:20.846867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.729 [2024-11-19 16:42:20.846939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.729 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.847223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.847286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.847524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.847590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.847802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.847883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.848156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.848218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.848433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.848500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.848754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.848821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.849089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.849170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.849452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.849514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.849752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.849813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.850033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.850111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.850351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.850411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.850655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.850721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.850972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.851054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.851410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.851493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.851741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.851806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.852066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.852165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.852425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.852491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.852701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.852767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.852972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.853039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.853325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.853387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.853685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.853751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.853932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.853998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.854277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.854345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.854546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.854612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.854906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.854971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.855259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.855327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.855604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.855669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.855931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.856214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.856381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.856569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.856722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.856898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.730 [2024-11-19 16:42:20.856934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.730 qpair failed and we were unable to recover it.
00:36:30.730 [2024-11-19 16:42:20.857083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.857137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.857283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.857318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.857435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.857481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.857602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.857637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.857807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.857860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.858931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.858965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.859877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.859912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.860900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.860933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.861831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.861863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.862021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.862048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.862182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.862211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.862318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.862359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.862495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.862532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.862732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.862798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.863039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.863132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.863242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.863275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.731 qpair failed and we were unable to recover it.
00:36:30.731 [2024-11-19 16:42:20.863409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.731 [2024-11-19 16:42:20.863441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.863572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.863634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.863884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.863946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.864924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.864953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.865966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.865993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.866845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.866878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.867884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.867993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.732 [2024-11-19 16:42:20.868736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.732 qpair failed and we were unable to recover it.
00:36:30.732 [2024-11-19 16:42:20.868837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.733 [2024-11-19 16:42:20.868867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.733 qpair failed and we were unable to recover it.
00:36:30.733 [2024-11-19 16:42:20.868989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.733 [2024-11-19 16:42:20.869019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.733 qpair failed and we were unable to recover it.
00:36:30.733 [2024-11-19 16:42:20.869187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.733 [2024-11-19 16:42:20.869218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.733 qpair failed and we were unable to recover it.
00:36:30.733 [2024-11-19 16:42:20.869321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.869355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.869452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.869479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.869584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.869612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.869703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.869730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.869856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.869884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.869983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.870631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.870890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.870919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.871327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.871867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.871896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.872030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.872683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.872961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.872989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.733 [2024-11-19 16:42:20.873359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 00:36:30.733 [2024-11-19 16:42:20.873921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.733 [2024-11-19 16:42:20.873949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.733 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.874035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.874681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.874961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.874989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.875413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.875880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.875979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.876109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.876239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.876363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.876512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.876627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.876755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.876885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.876915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.877419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.877854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.877881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.878004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.878033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 
00:36:30.734 [2024-11-19 16:42:20.878136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.878164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.734 [2024-11-19 16:42:20.878295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.734 [2024-11-19 16:42:20.878323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.734 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.878428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.878456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.878553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.878580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.878709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.878737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 
00:36:30.735 [2024-11-19 16:42:20.878836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.878865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.878963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.878991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.879132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.879162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.879253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.879282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 00:36:30.735 [2024-11-19 16:42:20.879401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.735 [2024-11-19 16:42:20.879429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.735 qpair failed and we were unable to recover it. 
00:36:30.735 [2024-11-19 16:42:20.879536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.735 [2024-11-19 16:42:20.879563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.735 qpair failed and we were unable to recover it.
00:36:30.735 [2024-11-19 16:42:20.879806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.735 [2024-11-19 16:42:20.879841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:30.735 qpair failed and we were unable to recover it.
00:36:30.738 [messages repeated through 2024-11-19 16:42:20.901507 for tqpair=0x1443b40 and tqpair=0x7feed4000b90, addr=10.0.0.2, port=4420]
00:36:30.738 [2024-11-19 16:42:20.901752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.901816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.902094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.902159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.902398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.902462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.902733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.902797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.903131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.903198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 
00:36:30.738 [2024-11-19 16:42:20.903494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.903559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.903847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.903911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.904190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.904224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.904392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.904424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.904685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.904750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 
00:36:30.738 [2024-11-19 16:42:20.905106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.905172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.905427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.905492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.905782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.905814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.905946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.905980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.906148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.906182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 
00:36:30.738 [2024-11-19 16:42:20.906439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.906472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.906635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.906668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.906877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.906942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.907143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.907207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.907489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.907553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 
00:36:30.738 [2024-11-19 16:42:20.907814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.907879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.908190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.908223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.908338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.908371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.908588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.908654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.908892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.908925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 
00:36:30.738 [2024-11-19 16:42:20.909062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.909101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.909239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.909272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.909534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.909594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.738 [2024-11-19 16:42:20.909814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.738 [2024-11-19 16:42:20.909878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.738 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.910048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.910128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.910441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.910500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.910804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.910869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.911157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.911191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.911333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.911366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.911605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.911669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.911960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.912025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.912339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.912414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.912761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.912793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.912929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.912962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.913245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.913311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.913537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.913596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.913699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.913731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.913871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.913904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.914179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.914245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.914510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.914542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.914650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.914682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.914823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.914855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.915088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.915154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.915401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.915466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.915705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.915738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.915887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.915920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.916190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.916224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.916363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.916397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.916538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.916571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.916777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.916842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.917150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.917211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.917483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.917548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.917757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.917822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.918111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.918431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.918484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 
00:36:30.739 [2024-11-19 16:42:20.918615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.918648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.739 [2024-11-19 16:42:20.918937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.739 [2024-11-19 16:42:20.918997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.739 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.919305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.919372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.919625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.919700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.919961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.920026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 
00:36:30.740 [2024-11-19 16:42:20.920356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.920417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.920713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.920778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.921092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.921158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.921385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.921444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.921672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.921733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 
00:36:30.740 [2024-11-19 16:42:20.922025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.922108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.922370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.922435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.922723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.922755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.922919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.922951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.923236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.923270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 
00:36:30.740 [2024-11-19 16:42:20.923378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.923411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.923517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.923549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.923656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.923688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.923826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.923859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.924080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.924113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 
00:36:30.740 [2024-11-19 16:42:20.924223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.924254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.924445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.924504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.924778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.924811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.924952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.924985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 00:36:30.740 [2024-11-19 16:42:20.925111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.740 [2024-11-19 16:42:20.925143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.740 qpair failed and we were unable to recover it. 
00:36:30.740 [2024-11-19 16:42:20.925283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.925315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.925520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.925586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.925871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.925935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.926203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.926264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.926526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.926590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.926889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.926957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.927273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.927334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.927628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.927692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.927955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.928020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.928284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.928349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.928674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.928706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.928877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.928910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.929030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.929063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.929319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.929383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.740 [2024-11-19 16:42:20.929637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.740 [2024-11-19 16:42:20.929670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.740 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.929805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.929838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.929972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.930003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.930237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.930286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.930588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.930646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.930949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.931014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.931257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.931323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.931620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.931653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.931788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.931821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.931956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.931990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.932171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.932230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.932472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.932536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.932789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.932854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.933182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.933248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.933497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.933561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.933754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.933819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.934098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.934164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.934462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.934521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.934771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.934836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.935146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.935180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.935289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.935322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.935463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.935495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.935674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.935706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.935873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.935905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.936014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.936047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.936308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.936341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.936481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.936512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.936796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.936829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.936963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.936996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.937259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.937325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.937521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.937582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.937927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.937960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.938100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.938139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.938305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.938337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.938565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.938629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.938926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.938991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.939240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.939306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.939558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.939622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.939917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.741 [2024-11-19 16:42:20.939981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.741 qpair failed and we were unable to recover it.
00:36:30.741 [2024-11-19 16:42:20.940300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.940361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.940548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.940607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.940821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.940854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.940963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.940996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.941137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.941224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.941452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.941517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.941765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.941825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.942022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.942102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.942372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.942438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.942716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.942749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.942858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.942891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.943052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.943104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.943428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.943493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.943740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.943805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.944054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.944096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.944239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.944272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.944412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.944444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.944611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.944678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.944943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.944976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.945085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.945119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.945253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.945291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.945438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.945471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.945699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.945763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.946063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.946103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.946237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.946270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.946449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.946514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.946781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.946846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.947136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.947170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.947308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.947341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.947489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.947522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.947808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.947840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.947984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.948017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.948158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.948191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.948455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.948488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.948632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.948665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.948867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.948929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.949223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.949289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.949586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.742 [2024-11-19 16:42:20.949651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.742 qpair failed and we were unable to recover it.
00:36:30.742 [2024-11-19 16:42:20.949896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.743 [2024-11-19 16:42:20.949962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.743 qpair failed and we were unable to recover it.
00:36:30.743 [2024-11-19 16:42:20.950233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.743 [2024-11-19 16:42:20.950266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.743 qpair failed and we were unable to recover it.
00:36:30.743 [2024-11-19 16:42:20.950392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.743 [2024-11-19 16:42:20.950425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.743 qpair failed and we were unable to recover it.
00:36:30.743 [2024-11-19 16:42:20.950558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.743 [2024-11-19 16:42:20.950592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:30.743 qpair failed and we were unable to recover it.
00:36:30.743 [2024-11-19 16:42:20.950786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.950851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.951155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.951221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.951518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.951550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.951694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.951726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.952009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.952090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.952303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.952340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.952447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.952480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.952702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.952767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.953001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.953065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.953356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.953420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.953706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.953771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.954012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.954094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.954391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.954456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.954696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.954761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.955004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.955106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.955320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.955370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.955570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.955619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.955801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.955849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.956396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.956871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.956983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.957140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.957312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.957483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.957652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.957929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.957994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 
00:36:30.743 [2024-11-19 16:42:20.958212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.743 [2024-11-19 16:42:20.958262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.743 qpair failed and we were unable to recover it. 00:36:30.743 [2024-11-19 16:42:20.958548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.958581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.958693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.958725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.958931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.959011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.959303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.959352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 
00:36:30.744 [2024-11-19 16:42:20.959635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.959699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.959981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.960014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.960137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.960170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.960304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.960336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.960442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.960474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 
00:36:30.744 [2024-11-19 16:42:20.960646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.960709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.960989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.961054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.961330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.961403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.961679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.961758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 00:36:30.744 [2024-11-19 16:42:20.962063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.744 [2024-11-19 16:42:20.962154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:30.744 qpair failed and we were unable to recover it. 
00:36:30.744 [2024-11-19 16:42:20.962807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.744 [2024-11-19 16:42:20.962898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.744 qpair failed and we were unable to recover it.
00:36:30.745 [2024-11-19 16:42:20.978154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.978206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.978468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.978533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.978798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.978850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.978990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.979023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.979249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.979301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.979527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.979575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.979885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.979944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.980236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.980286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.980605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.980638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.980764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.980800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.981056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.981097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.981268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.981301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.981513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.981600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.981908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.981967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.982235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.982300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.982552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.982617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.982869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.982933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.983189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.983256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.983488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.983552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.983814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.983874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.984124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.984191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.984526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.984601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.984850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.984918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.985178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.985211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.985350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.985382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.985489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.985522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.985656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.985688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.985908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.985973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.986182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.986249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.986537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.986569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.986673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.986705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.986868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.986900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.987157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.987224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.987434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.987498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.987756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.987824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 
00:36:30.746 [2024-11-19 16:42:20.988139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.988200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.988554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.988619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.746 qpair failed and we were unable to recover it. 00:36:30.746 [2024-11-19 16:42:20.988867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.746 [2024-11-19 16:42:20.988940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.989252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.989318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.989623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.989687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.989920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.989953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.990091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.990126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.990368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.990434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.990757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.990822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.991116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.991183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.991388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.991453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.991740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.991805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.992061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.992141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.992370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.992437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.992744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.992804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.993008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.993088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.993336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.993402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.993696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.993756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.994010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.994093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.994317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.994385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.994636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.994702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.994966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.995029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.995356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.995389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.995552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.995585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.995724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.995756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.996030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.996126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.996374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.996452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.996658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.996725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.996995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.997028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.997173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.997206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.997443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.997508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.997806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.997839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.997955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.997988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.998137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.998170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.998383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.998416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:20.998549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.998581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.998729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.998778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.999045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.999124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.999397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.999462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 00:36:30.747 [2024-11-19 16:42:20.999724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.747 [2024-11-19 16:42:20.999789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.747 qpair failed and we were unable to recover it. 
00:36:30.747 [2024-11-19 16:42:21.000048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.000128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.000429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.000493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.000792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.000856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.001183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.001249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.001547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.001611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.001873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.001933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.002129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.002212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.002465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.002528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.002827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.002891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.003148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.003182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.003298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.003331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.003470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.003503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.003633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.003667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.003947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.004012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.004263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.004329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.004628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.004688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.004962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.005023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.005300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.005364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.005658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.005706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.005907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.005973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.006300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.006366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.006629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.006694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.006938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.007005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.007285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.007351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.007618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.007683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.008034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.008339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.008414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.008712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.008778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.009067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.009148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.009399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.009466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 
00:36:30.748 [2024-11-19 16:42:21.009719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.009784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.010088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.010155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.748 [2024-11-19 16:42:21.010415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.748 [2024-11-19 16:42:21.010479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.748 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.010780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.010845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.011107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.011173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.011420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.011484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.011781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.011845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.012144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.012211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.012468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.012528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.012725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.012785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.013046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.013126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.013390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.013455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.013708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.013774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.014034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.014114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.014357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.014421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.014722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.014786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.015033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.015111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.015396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.015461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.015727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.015791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.016049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.016148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.016438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.016502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.016738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.016803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.017026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.017109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.017420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.017485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.017729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.017797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.018108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.018170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.018464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.018530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.018787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.018852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.019158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.019218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.019495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.019560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.019812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.019877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.020171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.020236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.020429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.020495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.020758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.020824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.021121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.021186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.021475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.021540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.021838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.021913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.022159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.022225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.022475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.022540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 00:36:30.749 [2024-11-19 16:42:21.022842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.022907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.749 qpair failed and we were unable to recover it. 
00:36:30.749 [2024-11-19 16:42:21.023198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.749 [2024-11-19 16:42:21.023263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.023534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.023599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.023869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.023935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.024201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.024261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.024496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.024556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.024867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.024932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.025129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.025195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.025443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.025509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.025705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.025774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.025994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.026061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.026403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.026469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.026718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.026783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.027053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.027131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.027477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.027542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.027846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.027912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.028154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.028222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.028530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.028595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.028894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.028955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.029225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.029290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.029556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.029616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.029814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.029899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.030127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.030192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.030482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.030546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.030763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.030830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.031132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.031193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.031397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.031458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.031634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.031696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.031909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.031968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.032278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.032344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.032630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.032696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.032943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.033009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.033311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.033377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.033583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.033649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.033903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.033967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 00:36:30.750 [2024-11-19 16:42:21.034274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.750 [2024-11-19 16:42:21.034340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:30.750 qpair failed and we were unable to recover it. 
00:36:30.750 [2024-11-19 16:42:21.034593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.750 [2024-11-19 16:42:21.034657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.750 qpair failed and we were unable to recover it.
00:36:30.750 [2024-11-19 16:42:21.034953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.750 [2024-11-19 16:42:21.035028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.750 qpair failed and we were unable to recover it.
00:36:30.750 [2024-11-19 16:42:21.035293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.750 [2024-11-19 16:42:21.035358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.750 qpair failed and we were unable to recover it.
00:36:30.750 [2024-11-19 16:42:21.035651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.035715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.035963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.036023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.036257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.036318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.036621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.036680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.036893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.036958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.037264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.037330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.037568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.037634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.037888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.037954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.038208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.038274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.038533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.038596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.038848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.038913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.039223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.039286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:30.751 [2024-11-19 16:42:21.039629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.751 [2024-11-19 16:42:21.039693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:30.751 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.039945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.040011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.040350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.040413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.040666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.040724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.040900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.040960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.041217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.041283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.023 [2024-11-19 16:42:21.041534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.023 [2024-11-19 16:42:21.041599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.023 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.041831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.041896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.042111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.042179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.042394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.042462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.042713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.042778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.042988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.043055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.043329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.043394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.043707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.043773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.044055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.044144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.044356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.044421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.044625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.044692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.044953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.045020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.045293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.045360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.045612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.045676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.045984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.046049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.046315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.046380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.046611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.046677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.046918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.046984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.047219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.047284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.047601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.047666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.047895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.047970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.048224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.048289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.048523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.048588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.048877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.048941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.049197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.049262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.049493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.049557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.049769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.049837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.050097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.050163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.050419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.050484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.050696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.050761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.051011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.051107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.051410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.051474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.051728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.051793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.052050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.052131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.052396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.052464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.052729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.052793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.053014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.053095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.024 [2024-11-19 16:42:21.053397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.024 [2024-11-19 16:42:21.053462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.024 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.053711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.053778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.054084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.054150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.054351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.054416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.054663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.054729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.054928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.054995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.055296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.055363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.055623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.055688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.055994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.056060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.056337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.056402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.056590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.056655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.056883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.056948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.057191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.057256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.057494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.057559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.057797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.057863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.058047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.058127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.058414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.058479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.058695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.058760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.059051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.059130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.059395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.059461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.059752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.059817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.060113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.060178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.060402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.060470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.060669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.060745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.060962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.061028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.061301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.061368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.061581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.061647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.061868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.061933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.062226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.062293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.062589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.062654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.062852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.062918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.063165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.063232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.063479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.063545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.063783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.063848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.064057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.064179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.064384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.064450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.064675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.064740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.065026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.025 [2024-11-19 16:42:21.065107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.025 qpair failed and we were unable to recover it.
00:36:31.025 [2024-11-19 16:42:21.065397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.065462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.065715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.065781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.066112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.066177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.066374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.066440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.066689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.066758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.067012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.067092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.067353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.067418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.067685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.067750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.068040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.068121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.068410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.068475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.068733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.068798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.069023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.069099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.069390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.069491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.069708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.069777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.070039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.070133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.070360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.070425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.070727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.070797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.071017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.071107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.071361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.071430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.071668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.071736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.072004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.072086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.072349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.072413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.072675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.072741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.072962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.073029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.073298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.073364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.073562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.073639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.073852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.073918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.074215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.074280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.074524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.074590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.074844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.074912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.075120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.075186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.075433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.075498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.075731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.075800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.076097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.076163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.076378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.076443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.076662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.076728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.077028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.077106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.077327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.026 [2024-11-19 16:42:21.077392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.026 qpair failed and we were unable to recover it.
00:36:31.026 [2024-11-19 16:42:21.077635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.077701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.077975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.078039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.078316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.078381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.078595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.078664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.078880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.078947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.079205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.079273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.079475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.079541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.079787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.079852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.080152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.080219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.080418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.080484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.080731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.080798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.081005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.081087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.081367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.081433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.081658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.081722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.082005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.082122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.082404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.082473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.082731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.082796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.083025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.083112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.083334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.083400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.083701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.083765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.084018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.084096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.084347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.084408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.084683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.084742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.084992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.085053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.085284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.085346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.085583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.085644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.085995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.086102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.086347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.086418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.086729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.086795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.087012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.087111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.087420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.087486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.087754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.087819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.088055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.088133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.088351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.088411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.088675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.088739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.089022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.089094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.089293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.089355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.089583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.027 [2024-11-19 16:42:21.089647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.027 qpair failed and we were unable to recover it.
00:36:31.027 [2024-11-19 16:42:21.089934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.089998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.090269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.090336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.090583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.090650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.090916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.090982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.091196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.091265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.091482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.091546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.091795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.091862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.092129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.092196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.092413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.092478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.092692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.092758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.093054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.093130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.093325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.093387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.093661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.093721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.093964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.094023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.094239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.094299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.094490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.094551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.094781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.094886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.095118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.028 [2024-11-19 16:42:21.095183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.028 qpair failed and we were unable to recover it.
00:36:31.028 [2024-11-19 16:42:21.095415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.095480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.095711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.095776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.096084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.096152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.096413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.096477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.096683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.096748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 
00:36:31.028 [2024-11-19 16:42:21.097005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.097083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.097350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.097414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.097661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.097725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.097948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.098012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 00:36:31.028 [2024-11-19 16:42:21.098261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.098327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.028 qpair failed and we were unable to recover it. 
00:36:31.028 [2024-11-19 16:42:21.098628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.028 [2024-11-19 16:42:21.098693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.098979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.099043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.099329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.099394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.099599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.099664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.099911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.099977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.100206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.100272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.100510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.100575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.100845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.100909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.101136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.101234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.101550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.101619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.101861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.101927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.102182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.102249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.102459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.102526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.102792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.102856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.103098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.103164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.103454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.103530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.103786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.103850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.104110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.104176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.104397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.104464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.104697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.104762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.105057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.105154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.105377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.105442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.105713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.105777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.106001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.106066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.106346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.106411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.106694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.106758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.107026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.107107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.107380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.107440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.107715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.107774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.108053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.108130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.108366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.108425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.108659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.108719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.108993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.109053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.109344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.109403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.109628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.109688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 
00:36:31.029 [2024-11-19 16:42:21.109921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.109982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.110274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.110340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.110593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.110659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.029 [2024-11-19 16:42:21.110938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.029 [2024-11-19 16:42:21.110997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.029 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.111231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.111293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.111516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.111576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.111755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.111815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.112027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.112104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.112326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.112387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.112602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.112662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.112847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.112906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.113126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.113188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.113386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.113446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.113732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.113791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.114022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.114094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.114329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.114389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.114603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.114663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.114844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.114904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.115117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.115178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.115370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.115429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.115675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.115734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.116003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.116109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.116403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.116468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.116663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.116724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.116909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.116969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.117223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.117286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.117480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.117541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.117727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.117787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.118032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.118110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.118314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.118373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.118608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.118668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.118904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.118965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.119169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.119232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.119499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.119558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.119795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.119854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.120102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.120164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.120388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.120448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.120690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.120750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.120960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.121020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 00:36:31.030 [2024-11-19 16:42:21.121238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.030 [2024-11-19 16:42:21.121298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.030 qpair failed and we were unable to recover it. 
00:36:31.030 [2024-11-19 16:42:21.121531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.030 [2024-11-19 16:42:21.121591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.030 qpair failed and we were unable to recover it.
[... identical connect()/errno-111 retries against tqpair=0x1443b40 repeated through 16:42:21.123655 ...]
00:36:31.031 [2024-11-19 16:42:21.123751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1451970 (9): Bad file descriptor
00:36:31.031 [2024-11-19 16:42:21.124137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.031 [2024-11-19 16:42:21.124215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.031 qpair failed and we were unable to recover it.
[... identical connect()/errno-111 retries against tqpair=0x7feed8000b90 repeated through 16:42:21.157368 ...]
00:36:31.034 [2024-11-19 16:42:21.157557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.157621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.157910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.157974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.158304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.158370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.158582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.158647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.158893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.158957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.159270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.159336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.159643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.159707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.159915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.159982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.160256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.160321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.160543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.160608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.160856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.160923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.161189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.161255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.161465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.161529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.161805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.161869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.162086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.162152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.162338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.162404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.162660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.162723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.162961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.163025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.163261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.163327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.163619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.163682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.163884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.163949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.164211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.164275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.164491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.164556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.164790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.164854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.165144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.165209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.165512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.165576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.165846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.165911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.166157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.166225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.166519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.166583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.166851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.166915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 
00:36:31.034 [2024-11-19 16:42:21.167156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.167222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.167465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.034 [2024-11-19 16:42:21.167530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.034 qpair failed and we were unable to recover it. 00:36:31.034 [2024-11-19 16:42:21.167748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.167811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.168030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.168109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.168362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.168426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.168687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.168750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.168969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.169035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.169242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.169309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.169600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.169665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.169910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.169985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.170236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.170300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.170541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.170605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.170820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.170886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.171135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.171203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.171429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.171496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.171761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.171826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.172108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.172174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.172444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.172507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.172757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.172821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.173095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.173160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.173366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.173431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.173721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.173785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.174094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.174160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.174395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.174458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.174704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.174769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.175104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.175169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.175399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.175464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.175736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.175800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.176103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.176167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.176426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.176493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.176792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.176857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.177081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.177148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.177412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.177476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.177784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.177848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.178110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.178175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.178423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.178490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.178786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.178851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.179103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.179168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.179461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.179525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 00:36:31.035 [2024-11-19 16:42:21.179781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.035 [2024-11-19 16:42:21.179846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.035 qpair failed and we were unable to recover it. 
00:36:31.035 [2024-11-19 16:42:21.180099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.180166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.180458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.180524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.180780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.180844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.181108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.181173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.181423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.181488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.181748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.181812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.182053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.182133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.182375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.182440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.182679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.182744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.182999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.183101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.183362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.183427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.183683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.183752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.184055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.184137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.184434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.184498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.184755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.184820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.185109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.185177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.185441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.185505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.185790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.185854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.186151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.186217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.186463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.186534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.186782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.186850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.187107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.187174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.187435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.187500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.187763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.187828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.188094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.188162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.188406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.188471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.188770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.188835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.189051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.189131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.189402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.189467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.189729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.189794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.190050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.190131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.190434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.190498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.190792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.190857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.191124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.191192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.191442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.191508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 
00:36:31.036 [2024-11-19 16:42:21.191752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.191816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.192126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.192193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.036 [2024-11-19 16:42:21.192501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.036 [2024-11-19 16:42:21.192566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.036 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.192862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.192927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.193186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.193251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.193499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.193566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.193774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.193839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.194133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.194198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.194503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.194568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.194781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.194846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.195096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.195162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.195443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.195508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.195754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.195818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.196096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.196162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.196418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.196494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.196781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.196845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.197153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.197219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.197477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.197542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.197784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.197849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.198154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.198220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.198473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.198537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.198802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.198867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.199127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.199193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.199445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.199511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.199759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.199826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.200083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.200151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.200409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.200473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.200774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.200838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.201149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.201216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.201459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.201523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.201771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.201835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.202141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.202208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.202506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.202570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.202825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.202890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.203192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.203258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 
00:36:31.037 [2024-11-19 16:42:21.203514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.203578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.037 qpair failed and we were unable to recover it. 00:36:31.037 [2024-11-19 16:42:21.203817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.037 [2024-11-19 16:42:21.203882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.204184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.204250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.204489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.204554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.204804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.204869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.205170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.205234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.205509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.205575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.205772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.205840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.206096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.206160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.206459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.206525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.206779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.206845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.207150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.207216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.207428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.207493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.207754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.207818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.208108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.208175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.208395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.208463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.208753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.208824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.209130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.209196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.209488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.209553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.209768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.209833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.210117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.210187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.210503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.210578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.210842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.210906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.211207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.211283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.211545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.211609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.211895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.212274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.212340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.212551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.212615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.212910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.212974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.213248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.213313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.213566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.213630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.213928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.214004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.214324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.214390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.214654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.214717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.214937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.215004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 
00:36:31.038 [2024-11-19 16:42:21.215303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.215369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.215673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.215748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.215995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.216061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.216344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.038 [2024-11-19 16:42:21.216411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.038 qpair failed and we were unable to recover it. 00:36:31.038 [2024-11-19 16:42:21.216701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.039 [2024-11-19 16:42:21.216767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.039 qpair failed and we were unable to recover it. 
00:36:31.039 [2024-11-19 16:42:21.217064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.217143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.217387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.217452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.217703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.217769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.218065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.218144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.218396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.218463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.218710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.218776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.219042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.219131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.219361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.219429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.219727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.219803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.220091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.220156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.220452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.220528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.220767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.220832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.221085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.221152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.221407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.221472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.221748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.221812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.222059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.222137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.222437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.222502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.222749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.222813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.223117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.223193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.223452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.223516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.223780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.223847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.224148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.224225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.224485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.224553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.224813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.224877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.225130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.225196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.225492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.225567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.225813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.225877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.226169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.226235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.226536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.226610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.226913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.226978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.227301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.227367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.227614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.227678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.227940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.228003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.228270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.228336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.228586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.228654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.228903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.228970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.039 [2024-11-19 16:42:21.229279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.039 [2024-11-19 16:42:21.229345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.039 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.229543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.229608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.229862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.229926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.230194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.230259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.230553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.230618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.230878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.230943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.231232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.231298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.231580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.231644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.231947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.232021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.232333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.232397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.232648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.232724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.232980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.233044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.233357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.233422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.233667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.233731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.233974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.234042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.234256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.234322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.234620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.234684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.234933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.234997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.235287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.235353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.235573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.235641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.235938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.236010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.236329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.236395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.236593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.236661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.236969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.237043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.237317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.237384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.237681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.237757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.238000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.238088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.238297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.238365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.238654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.238729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.239015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.239093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.239345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.239409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.239713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.239785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.240091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.240157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.240405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.240469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.240654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.240720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.240974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.241039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.241344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.241409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.241712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.241776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.040 qpair failed and we were unable to recover it.
00:36:31.040 [2024-11-19 16:42:21.242085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.040 [2024-11-19 16:42:21.242157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.242440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.242509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.242704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.242767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.243003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.243067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.243362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.243427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.243726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.243802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.244054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.244137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.244396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.244461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.244754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.244818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.245089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.245153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.245399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.245463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.245720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.245787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.246088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.246173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.246467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.246533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.246817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.246881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.247135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.247200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.247391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.247456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.247754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.247829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.248067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.248147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.248438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.248502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.248748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.041 [2024-11-19 16:42:21.248811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.041 qpair failed and we were unable to recover it.
00:36:31.041 [2024-11-19 16:42:21.249049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.249125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.249371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.249436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.249735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.249798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.249999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.250062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.250351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.250415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 
00:36:31.041 [2024-11-19 16:42:21.250717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.250782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.251024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.251101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.251313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.251379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.251612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.251677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.251921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.251992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 
00:36:31.041 [2024-11-19 16:42:21.252298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.252364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.252664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.252729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.252975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.253041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.253350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.253417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.253708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.253773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 
00:36:31.041 [2024-11-19 16:42:21.254033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.254114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.254366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.041 [2024-11-19 16:42:21.254432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.041 qpair failed and we were unable to recover it. 00:36:31.041 [2024-11-19 16:42:21.254719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.254794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.255097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.255163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.255428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.255495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.255793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.255867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.256132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.256197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.256451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.256515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.256819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.256894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.257182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.257247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.257532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.257597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.257859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.257926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.258230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.258296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.258596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.258661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.258959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.259024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.259354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.259418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.259719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.259794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.260097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.260174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.260492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.260556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.260845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.260910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.261214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.261281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.261518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.261583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.261825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.261890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.262142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.262211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.262514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.262588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.262800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.262865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.263155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.263221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.263519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.263583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.263778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.263845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.264136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.264202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.264530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.264595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.264893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.264966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.265221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.265287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.265556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.265622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 00:36:31.042 [2024-11-19 16:42:21.265939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.266012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.042 qpair failed and we were unable to recover it. 
00:36:31.042 [2024-11-19 16:42:21.266273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.042 [2024-11-19 16:42:21.266342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.266608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.266673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.266978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.267050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.267376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.267445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.267685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.267753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.267993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.268059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.268324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.268391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.268677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.268741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.269024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.269102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.269403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.269478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.269774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.269837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.270097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.270164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.270384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.270451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.270704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.270768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.271064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.271157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.271417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.271482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.271738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.271801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.272098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.272164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.272380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.272445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.272694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.272767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.273089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.273154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.273419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.273500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.273758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.273823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.274094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.274162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.274484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.274555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.274864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.274928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.275169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.275203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.275350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.275387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.275502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.275535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.275735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.275801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.276055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.276152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.276408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.276473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.276773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.276838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.277098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.277164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 00:36:31.043 [2024-11-19 16:42:21.277407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.277470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it. 
00:36:31.043 [2024-11-19 16:42:21.277741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.043 [2024-11-19 16:42:21.277807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.043 qpair failed and we were unable to recover it.
00:36:31.043 [... the preceding connect()/qpair error pair repeats ~115 times for tqpair=0x7feed8000b90 (addr=10.0.0.2, port=4420, errno = 111) between 16:42:21.277741 and 16:42:21.314186; identical repeats elided ...]
00:36:31.047 [2024-11-19 16:42:21.314121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.314186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it.
00:36:31.047 [2024-11-19 16:42:21.314403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.314470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.314717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.314782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.315015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.315095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.315317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.315382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.315628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.315691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.315953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.316018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.316285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.316349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.316586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.316650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.316850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.316915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.317155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.317221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.317516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.317580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.317895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.317959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.318231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.318296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.318524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.318588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.318822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.318888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.319137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.319202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.319416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.319482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.319744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.319807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.320016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.320095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.320339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.320402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.320696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.320759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.320970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.321035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.321331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.321395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.321705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.321769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.322016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.322092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.322380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.322445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.322707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.322771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.323017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.323096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.323301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.323368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.323673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.323737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.047 [2024-11-19 16:42:21.323945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.324012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.324274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.324351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.324604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.324669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.324914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.324978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 00:36:31.047 [2024-11-19 16:42:21.325195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.047 [2024-11-19 16:42:21.325259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.047 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.325512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.325577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.325828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.325893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.326151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.326216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.326474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.326537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.326747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.326812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.327110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.327176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.327480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.327544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.327805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.327868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.328123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.328188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.328390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.328455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.328767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.328833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.329041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.329132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.329366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.329431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.329723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.329787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.330038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.330114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.330325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.330389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.330601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.330666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.330920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.330983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.331209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.331275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.331517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.331580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.331789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.331852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.332041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.332133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.332330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.332393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.332623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.332688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.332939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.333003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.333318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.333384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.333611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.333675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.333986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.334049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.334316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.334379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.334637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.334700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.334953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.335018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.335316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.335380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.335628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.335693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.335954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.336019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.336243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.336306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 
00:36:31.048 [2024-11-19 16:42:21.336505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.336568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.336824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.336905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.048 qpair failed and we were unable to recover it. 00:36:31.048 [2024-11-19 16:42:21.337141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.048 [2024-11-19 16:42:21.337205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.049 qpair failed and we were unable to recover it. 00:36:31.049 [2024-11-19 16:42:21.337416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.049 [2024-11-19 16:42:21.337480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.049 qpair failed and we were unable to recover it. 00:36:31.049 [2024-11-19 16:42:21.337702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.049 [2024-11-19 16:42:21.337767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.049 qpair failed and we were unable to recover it. 
00:36:31.049 [2024-11-19 16:42:21.337968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.338034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.338305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.338372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.338624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.338689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.338911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.338975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.339291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.339357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.339630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.339693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.339946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.340010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.340336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.340442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.340709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.340779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.341034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.341122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.341396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.341461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.341727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.341792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.342004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.342086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.342338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.342402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.342675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.342740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.342968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.343032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.343296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.343362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.343566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.343630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.343833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.343898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.344181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.344246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.344517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.344581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.344801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.344866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.345138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.049 [2024-11-19 16:42:21.345204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.049 qpair failed and we were unable to recover it.
00:36:31.049 [2024-11-19 16:42:21.345409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.345502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.345805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.345867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.346088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.346148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.346390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.346454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.346677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.346742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.346988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.347052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.347352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.347416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.347666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.347729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.347989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.348053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.348289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.348353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.348612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.322 [2024-11-19 16:42:21.348675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.322 qpair failed and we were unable to recover it.
00:36:31.322 [2024-11-19 16:42:21.348882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.348946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.349159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.349224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.349416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.349484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.349729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.349794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.350126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.350192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.350432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.350495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.350739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.350803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.351056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.351134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.351356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.351425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.351707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.351771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.351994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.352059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.352303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.352368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.352580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.352643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.352867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.352931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.353205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.353265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.353491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.353550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.353768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.353837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.354033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.354110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.354399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.354458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.354724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.354785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.355018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.355091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.355326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.355384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.355597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.355657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.355894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.355952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.356198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.356258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.356573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.356637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.356860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.356923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.357171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.357259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.357517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.357580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.357887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.357951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.358183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.358249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.358495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.358554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.358744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.358803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.359085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.359164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.359411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.359469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.359789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.359853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.360067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.360172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.360397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.323 [2024-11-19 16:42:21.360461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.323 qpair failed and we were unable to recover it.
00:36:31.323 [2024-11-19 16:42:21.360695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.360759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.361002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.361065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.361312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.361377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.361595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.361658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.361949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.362008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.362273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.362334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.362609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.362668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.362851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.362910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.363128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.363190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.363420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.363479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.363694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.363753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.364012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.364118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.364336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.364395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.364623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.364682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.364955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.365019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.365291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.365349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.365587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.365646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.365855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.365935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.366215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.366274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.366520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.366588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.366846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.366911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.367217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.367282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.367558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.367622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.367950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.368014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.368281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.368344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.368584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.368650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.368979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.369044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.369331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.369395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.369713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.369777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.370060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.370153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.370346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.370404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.370610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.370668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.370866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.370926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.371193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.371253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.371526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.371590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.371791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.371856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.372154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.372220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.372477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.372540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.324 qpair failed and we were unable to recover it.
00:36:31.324 [2024-11-19 16:42:21.372749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.324 [2024-11-19 16:42:21.372813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.325 qpair failed and we were unable to recover it.
00:36:31.325 [2024-11-19 16:42:21.373065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.325 [2024-11-19 16:42:21.373146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.325 qpair failed and we were unable to recover it.
00:36:31.325 [2024-11-19 16:42:21.373393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.325 [2024-11-19 16:42:21.373456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.325 qpair failed and we were unable to recover it.
00:36:31.325 [2024-11-19 16:42:21.373741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.325 [2024-11-19 16:42:21.373805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.325 qpair failed and we were unable to recover it.
00:36:31.325 [2024-11-19 16:42:21.374044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.325 [2024-11-19 16:42:21.374126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.325 qpair failed and we were unable to recover it.
00:36:31.325 [2024-11-19 16:42:21.374368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.374432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.374679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.374743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.375028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.375107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.375365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.375441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.375707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.375771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.376021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.376114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.376331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.376395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.376701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.376764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.377006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.377090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.377377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.377440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.377713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.377776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.377994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.378058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.378328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.378392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.378649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.378712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.378995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.379059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.379336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.379401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.379662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.379725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.379980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.380045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.380315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.380380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.380648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.380711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.381003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.381067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.381314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.381379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.381620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.381683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.381886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.381949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.382239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.382306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.382612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.382676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.382918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.382982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.383208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.383274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.383573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.383637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.383936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.383999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 
00:36:31.325 [2024-11-19 16:42:21.384235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.384309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.384577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.384641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.384897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.384960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.325 [2024-11-19 16:42:21.385210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.325 [2024-11-19 16:42:21.385275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.325 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.385580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.385643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.385945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.386009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.386239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.386304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.386503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.386567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.386836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.386900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.387166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.387231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.387531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.387596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.387897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.387961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.388245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.388311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.388584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.388648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.388915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.388979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.389257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.389323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.389578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.389642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.389831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.389895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.390186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.390251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.390559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.390623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.390919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.390982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.391258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.391323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.391622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.391685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.391889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.391953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.392194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.392260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.392456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.392522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.392717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.392782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.393019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.393103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.393430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.393495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.393755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.393820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.394020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.394103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.394374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.394437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.394681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.394746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.395045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.395135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.395364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.395431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.395675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.395739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.395983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.396049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.396379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.396443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.396701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.396765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.397056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.397140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 
00:36:31.326 [2024-11-19 16:42:21.397443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.397507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.326 [2024-11-19 16:42:21.397820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.326 [2024-11-19 16:42:21.397885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.326 qpair failed and we were unable to recover it. 00:36:31.327 [2024-11-19 16:42:21.398140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.327 [2024-11-19 16:42:21.398209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.327 qpair failed and we were unable to recover it. 00:36:31.327 [2024-11-19 16:42:21.398467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.327 [2024-11-19 16:42:21.398533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.327 qpair failed and we were unable to recover it. 00:36:31.327 [2024-11-19 16:42:21.398797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.327 [2024-11-19 16:42:21.398868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.327 qpair failed and we were unable to recover it. 
00:36:31.327 [2024-11-19 16:42:21.399164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.327 [2024-11-19 16:42:21.399192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.327 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() to 10.0.0.2, port 4420 refused with errno = 111; sock connection error of tqpair=0x1443b40; qpair failed and unable to recover) repeats continuously from 16:42:21.399312 through 16:42:21.435660 ...]
00:36:31.330 [2024-11-19 16:42:21.435949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.436012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.436315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.436380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.436633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.436698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.436986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.437060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.437355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.437421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 
00:36:31.330 [2024-11-19 16:42:21.437680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.437748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.437967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.438031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.438266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.438331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.438593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.438658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.438887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.438951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 
00:36:31.330 [2024-11-19 16:42:21.439223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.439290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.439544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.439609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.439903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.439968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.440227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.440293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.440554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.440618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 
00:36:31.330 [2024-11-19 16:42:21.440817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.440883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.441151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.441217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.441540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.441605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.441912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.441976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.442283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.442350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 
00:36:31.330 [2024-11-19 16:42:21.442599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.442664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.442958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.443022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.443341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.443404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.443656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.443721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.443958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.444022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 
00:36:31.330 [2024-11-19 16:42:21.444253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.444318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.444612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.444677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.330 qpair failed and we were unable to recover it. 00:36:31.330 [2024-11-19 16:42:21.444883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.330 [2024-11-19 16:42:21.444948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.445155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.445220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.445420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.445487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.445734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.445811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.446089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.446155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.446446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.446511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.446819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.446884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.447134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.447198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.447439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.447504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.447809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.447873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.448097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.448162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.448370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.448435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.448722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.448786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.449100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.449165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.449468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.449533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.449834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.449899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.450200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.450266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.450539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.450604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.450916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.450979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.451258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.451324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.451612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.451677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.451943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.452006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.452267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.452332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.452634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.452699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.452955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.453019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.453237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.453301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.453550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.453615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.453922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.453986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.454281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.454347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.454594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.454659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.454957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.455030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.455316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.455381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.455630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.455695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 
00:36:31.331 [2024-11-19 16:42:21.455995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.456059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.331 qpair failed and we were unable to recover it. 00:36:31.331 [2024-11-19 16:42:21.456339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.331 [2024-11-19 16:42:21.456403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.456666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.456731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.457033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.457115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.457399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.457463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 
00:36:31.332 [2024-11-19 16:42:21.457762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.457826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.458118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.458184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.458444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.458509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.458746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.458812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.459106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.459171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 
00:36:31.332 [2024-11-19 16:42:21.459476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.459540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.459808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.459873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.460190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.460256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.460507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.460572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 00:36:31.332 [2024-11-19 16:42:21.460801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.332 [2024-11-19 16:42:21.460866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.332 qpair failed and we were unable to recover it. 
00:36:31.332 [2024-11-19 16:42:21.461125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.332 [2024-11-19 16:42:21.461211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.332 qpair failed and we were unable to recover it.
[... the preceding three-line error sequence repeated 113 more times for tqpair=0x1443b40 (addr=10.0.0.2, port=4420) between 16:42:21.461 and 16:42:21.499 ...]
00:36:31.335 [2024-11-19 16:42:21.499099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.335 [2024-11-19 16:42:21.499165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.335 qpair failed and we were unable to recover it.
00:36:31.335 [2024-11-19 16:42:21.499451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.499516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.499806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.499871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.500135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.500200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.500495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.500560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.500860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.500925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 
00:36:31.335 [2024-11-19 16:42:21.501187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.501254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.501564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.501629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.501921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.501985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.502320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.502385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.502631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.502697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 
00:36:31.335 [2024-11-19 16:42:21.502945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.503009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.503325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.503390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.503695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.503761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.504050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.504129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.504345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.504409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 
00:36:31.335 [2024-11-19 16:42:21.504649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.504713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.504967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.505040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.505347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.505412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.505669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.505733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.505989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.506052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 
00:36:31.335 [2024-11-19 16:42:21.506400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.506464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.506710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.506774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.506988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.507051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.335 [2024-11-19 16:42:21.507331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.335 [2024-11-19 16:42:21.507396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.335 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.507638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.507702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.507997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.508060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.508385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.508450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.508667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.508731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.508995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.509059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.509278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.509342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.509596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.509672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.509917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.509982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.510259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.510326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.510576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.510641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.510954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.511019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.511335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.511399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.511647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.511712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.511993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.512057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.512332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.512396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.512645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.512709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.512970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.513036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.513324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.513388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.513631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.513695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.513999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.514063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.514374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.514439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.514684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.514748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.515013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.515095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.515326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.515390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.515643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.515707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.515954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.516020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.516292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.516357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.516634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.516698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.516944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.517010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.517330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.517394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.517643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.517707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 
00:36:31.336 [2024-11-19 16:42:21.518004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.336 [2024-11-19 16:42:21.518096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.336 qpair failed and we were unable to recover it. 00:36:31.336 [2024-11-19 16:42:21.518376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.518440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.518626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.518699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.518978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.519043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.519268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.519333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.519573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.519637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.519924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.519989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.520332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.520396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.520656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.520721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.521035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.521117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.521379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.521443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.521667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.521731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.522022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.522113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.522387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.522452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.522744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.522809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.523060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.523143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.523470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.523535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.523796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.523860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.524160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.524226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.524490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.524555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.524847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.524911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.525164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.525230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.525504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.525569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.525862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.525926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.526223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.526289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.526499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.526853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.526916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.527159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.527223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.527528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.527592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 00:36:31.337 [2024-11-19 16:42:21.527834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.337 [2024-11-19 16:42:21.527909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.337 qpair failed and we were unable to recover it. 
00:36:31.337 [2024-11-19 16:42:21.528202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.528269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.528574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.528639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.528861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.528925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.529173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.529239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.529499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.529564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.529830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.529894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.530138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.530204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.530505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.530569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.337 qpair failed and we were unable to recover it.
00:36:31.337 [2024-11-19 16:42:21.530837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.337 [2024-11-19 16:42:21.530901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.531159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.531224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.531478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.531544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.531731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.531798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.532061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.532168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.532442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.532507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.532805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.532870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.533162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.533228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.533502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.533566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.533826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.533889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.534151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.534217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.534438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.534503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.534763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.534827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.535121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.535186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.535456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.535521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.535768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.535832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.536098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.536164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.536358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.536423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.536705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.536768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.537084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.537151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.537447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.537512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.537806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.537870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.538112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.538178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.538491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.538556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.538810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.538873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.539101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.539166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.539473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.539537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.539841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.539906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.540155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.540221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.540527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.540592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.540792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.540857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.541122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.541188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.541429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.541494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.541749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.541814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.542065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.542143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.542365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.542429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.542675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.542740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.543032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.338 [2024-11-19 16:42:21.543116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.338 qpair failed and we were unable to recover it.
00:36:31.338 [2024-11-19 16:42:21.543372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.543436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.543658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.543722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.543951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.544015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.544326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.544390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.544609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.544673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.544962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.545026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.545286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.545350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.545560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.545624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.545862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.545926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.546219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.546285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.546555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.546619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.546863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.546927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.547165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.547231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.547462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.547527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.547809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.547873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.548093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.548159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.548427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.548491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.548736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.548800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.549031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.549110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.549323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.549387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.549618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.549682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.549909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.549983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.550215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.550281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.550541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.550605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.550869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.550935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.551199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.551264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.551564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.551628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.551831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.551896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.552139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.552203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.552501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.552565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.552807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.552872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.553128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.553192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.553419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.553484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.553782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.553846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.554047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.554125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.554369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.554434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.554724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.554789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.555049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.339 [2024-11-19 16:42:21.555132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.339 qpair failed and we were unable to recover it.
00:36:31.339 [2024-11-19 16:42:21.555340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.555404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.555696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.555759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.556011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.556093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.556341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.556406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.556612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.556677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.556895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.556958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.557204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.557270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.557562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.557626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.557855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.557918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.558178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.558244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.558452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.340 [2024-11-19 16:42:21.558527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.340 qpair failed and we were unable to recover it.
00:36:31.340 [2024-11-19 16:42:21.558824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.558887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.559098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.559164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.559346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.559410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.559667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.559731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.559985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.560049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 
00:36:31.340 [2024-11-19 16:42:21.560281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.560346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.560633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.560697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.560945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.561009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.561263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.561329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.561614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.561678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 
00:36:31.340 [2024-11-19 16:42:21.561967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.562030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.562332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.562396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.562648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.562713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.563016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.563099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.563355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.563419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 
00:36:31.340 [2024-11-19 16:42:21.563663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.563728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.563980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.564044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.564348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.564412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.564628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.564691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.564940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.565004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 
00:36:31.340 [2024-11-19 16:42:21.565266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.565330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.565597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.565661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.565908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.565972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.566203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.566269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.566491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.566556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 
00:36:31.340 [2024-11-19 16:42:21.566836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.566901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.567192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.340 [2024-11-19 16:42:21.567258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.340 qpair failed and we were unable to recover it. 00:36:31.340 [2024-11-19 16:42:21.567512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.567576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.567833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.567897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.568101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.568169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.568413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.568477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.568736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.568800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.569017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.569095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.569350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.569413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.569671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.569735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.570029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.570106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.570330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.570394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.570580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.570644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.570838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.570901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.571189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.571255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.571465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.571529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.571763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.571827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.572115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.572180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.572435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.572499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.572772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.572835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.573094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.573160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.573356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.573420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.573709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.573772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.574024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.574121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.574345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.574410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.574609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.574672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.574925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.574989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.575219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.575285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.575486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.575550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.575803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.575867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.576087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.576153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.576435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.576499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.576754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.576818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.577040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.577122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 00:36:31.341 [2024-11-19 16:42:21.577378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.341 [2024-11-19 16:42:21.577441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.341 qpair failed and we were unable to recover it. 
00:36:31.341 [2024-11-19 16:42:21.577660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.577724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.577979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.578043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.578281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.578345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.578557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.578621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.578854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.578919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.342 [2024-11-19 16:42:21.579211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.579276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.579566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.579631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.579883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.579958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.580207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.580271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.580467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.580531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.342 [2024-11-19 16:42:21.580789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.580853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.581055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.581132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.581323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.581387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.581644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.581709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.581920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.581983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.342 [2024-11-19 16:42:21.582237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.582303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.582516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.582580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.582779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.582843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.583045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.583130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.583335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.583398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.342 [2024-11-19 16:42:21.583666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.583729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.583960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.584025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.584293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.584357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.584603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.584666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.584913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.584977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.342 [2024-11-19 16:42:21.585201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.585266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.585501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.585565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.585822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.585886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.586174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.586241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 00:36:31.342 [2024-11-19 16:42:21.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.342 [2024-11-19 16:42:21.586560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.342 qpair failed and we were unable to recover it. 
00:36:31.343 [... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for each further connection attempt through [2024-11-19 16:42:21.620593] ...]
00:36:31.345 [2024-11-19 16:42:21.620776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.620841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.621129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.621196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.621396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.621460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.621670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.621734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.621974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.622038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 
00:36:31.345 [2024-11-19 16:42:21.622325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.622389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.622671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.345 [2024-11-19 16:42:21.622735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.345 qpair failed and we were unable to recover it. 00:36:31.345 [2024-11-19 16:42:21.623030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.623107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.623366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.623430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.623675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.623739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.623925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.623990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.624259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.624324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.624621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.624686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.624952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.625016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.625304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.625368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.625634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.625699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.625912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.625987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.626216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.626281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.626521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.626585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.626790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.626854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.627110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.627176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.627471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.627536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.627804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.627870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.628117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.628183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.628443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.628507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.628734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.628798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.628991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.629057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.629322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.629386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.629671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.629735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.630019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.630097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.630352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.630418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.630621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.630684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.630913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.630978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.631216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.631281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.631485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.631549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.631848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.631913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.632156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.632222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.632469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.632534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.632804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.632867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.633067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.633158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.633410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.633475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.633677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.633741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.633955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.634020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.634250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.634313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 00:36:31.346 [2024-11-19 16:42:21.634587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.346 [2024-11-19 16:42:21.634652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.346 qpair failed and we were unable to recover it. 
00:36:31.346 [2024-11-19 16:42:21.634940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.635004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.635340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.635405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.635625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.635690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.635926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.635990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.636290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.636355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 
00:36:31.347 [2024-11-19 16:42:21.636644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.636709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.636922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.636987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.637227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.637294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.637587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.637653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.637938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.638001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 
00:36:31.347 [2024-11-19 16:42:21.638248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.638314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.638521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.638587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.638900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.638973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.639282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.639348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.639575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.639639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 
00:36:31.347 [2024-11-19 16:42:21.639927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.639992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.640255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.640320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.640577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.640641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.640929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.640995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.641282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.641348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 
00:36:31.347 [2024-11-19 16:42:21.641609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.641684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.641989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.642054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.642267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.642332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.642546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.642610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.642864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.642928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 
00:36:31.347 [2024-11-19 16:42:21.643183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.643271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.643581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.643647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.347 qpair failed and we were unable to recover it. 00:36:31.347 [2024-11-19 16:42:21.643916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.347 [2024-11-19 16:42:21.643982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.630 qpair failed and we were unable to recover it. 00:36:31.630 [2024-11-19 16:42:21.644179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.630 [2024-11-19 16:42:21.644246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.630 qpair failed and we were unable to recover it. 00:36:31.630 [2024-11-19 16:42:21.644512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.630 [2024-11-19 16:42:21.644576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.630 qpair failed and we were unable to recover it. 
00:36:31.630 [2024-11-19 16:42:21.644850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.630 [2024-11-19 16:42:21.644917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.630 qpair failed and we were unable to recover it.
[the same triple — posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats ~114 more times, timestamps 16:42:21.645179 through 16:42:21.682161]
00:36:31.633 [2024-11-19 16:42:21.682379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.633 [2024-11-19 16:42:21.682445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.633 qpair failed and we were unable to recover it. 00:36:31.633 [2024-11-19 16:42:21.682729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.633 [2024-11-19 16:42:21.682793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.633 qpair failed and we were unable to recover it. 00:36:31.633 [2024-11-19 16:42:21.683050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.633 [2024-11-19 16:42:21.683139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.633 qpair failed and we were unable to recover it. 00:36:31.633 [2024-11-19 16:42:21.683433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.683497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.683784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.683850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.684156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.684223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.684475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.684539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.684788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.684852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.685154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.685221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.685517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.685592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.685839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.685903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.686121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.686186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.686384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.686446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.686673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.686737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.687036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.687115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.687368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.687432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.687693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.687767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.687988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.688058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.688323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.688388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.688675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.688739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.689028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.689109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.689370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.689436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.689735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.689799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.690064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.690162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.690356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.690420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.690711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.690775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.691005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.691091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.691345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.691419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.691685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.691749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.692013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.692103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.692419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.692491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.692793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.692859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.693164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.693229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.693527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.693591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.693797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.693862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.694171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.694239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.694526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.694596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.694859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.694923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.695144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.695208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.634 [2024-11-19 16:42:21.695452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.695515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 
00:36:31.634 [2024-11-19 16:42:21.695780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.634 [2024-11-19 16:42:21.695845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.634 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.696124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.696189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.696455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.696519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.696753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.696828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.697140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.697206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.697462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.697526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.697763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.697827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.698055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.698152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.698390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.698454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.698751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.698817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.699036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.699127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.699380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.699454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.699762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.699826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.700031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.700108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.700400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.700475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.700760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.700823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.701107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.701172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.701474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.701539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.701826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.701890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.702164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.702230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.702528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.702592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.702900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.702963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.703188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.703254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.703516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.703579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.703879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.703942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.704180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.704244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.704460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.704525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.704813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.704877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.705096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.705161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.705443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.705509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 
00:36:31.635 [2024-11-19 16:42:21.705812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.705875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.706132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.706198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.635 qpair failed and we were unable to recover it. 00:36:31.635 [2024-11-19 16:42:21.706451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.635 [2024-11-19 16:42:21.706516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.636 qpair failed and we were unable to recover it. 00:36:31.636 [2024-11-19 16:42:21.706810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.636 [2024-11-19 16:42:21.706874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.636 qpair failed and we were unable to recover it. 00:36:31.636 [2024-11-19 16:42:21.707183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.636 [2024-11-19 16:42:21.707248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.636 qpair failed and we were unable to recover it. 
00:36:31.636 [2024-11-19 16:42:21.707550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.707614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.707863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.707928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.708173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.708237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.708501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.708565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.708873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.708937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.709184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.709248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.709502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.709566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.709868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.709933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.710205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.710271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.710591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.710656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.710940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.711004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.711229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.711297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.711548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.711612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.711870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.711933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.712229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.712294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.712592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.712668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.712950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.713023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.713324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.713394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.713640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.713704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.713990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.714053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.714369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.714444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.714758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.714823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.715121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.715186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.715408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.715473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.715753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.715818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.716089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.716163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.716428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.716493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.716790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.716855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.717126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.717192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.717477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.717542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.717794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.717857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.718101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.718167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.718462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.718526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.718776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.718840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.719097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.719163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.636 [2024-11-19 16:42:21.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.636 [2024-11-19 16:42:21.719495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.636 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.719792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.719865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.720168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.720235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.720440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.720504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.720761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.720825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.721133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.721199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.721503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.721568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.721816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.721880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.722098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.722164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.722433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.722498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.722701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.722765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.723012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.723106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.723412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.723476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.723775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.723839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.724046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.724130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.724433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.724498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.724752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.724816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.725067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.725147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.725403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.725468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.725663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.725729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.725969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.726033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.726349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.726414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.726719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.726784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.727092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.727157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.727359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.727426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.727732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.727804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.728106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.728171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.728420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.728485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.728699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.728784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.729085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.729150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.729394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.729464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.729663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.729727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.729973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.730038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.730277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.730341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.730606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.730670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.730916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.730987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.731245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.731312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.731622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.731686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.731883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.637 [2024-11-19 16:42:21.731947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.637 qpair failed and we were unable to recover it.
00:36:31.637 [2024-11-19 16:42:21.732225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.732291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.732591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.732657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.732888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.732952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.733243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.733310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.733612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.733677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.733936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.734000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.734271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.734336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.734583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.734650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.734940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.735004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.735254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.735320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.735580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.735645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.735928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.735999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.736265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.736330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.736620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.736686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.736947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.737010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.737325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.737390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.737683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.737758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.738010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.738090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.738390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.738455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.738717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.738782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.739038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.739124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.739342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.739406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.739663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.739727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.739928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.739992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.740295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.740361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.740617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.740681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.740937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.741003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.741280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.741346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.741594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.741658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.741914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.741979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.742263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.742330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.742628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.742692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.742932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.742995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.743278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.743344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.743592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.743656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.743901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.743965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.744232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.744299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.744511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.638 [2024-11-19 16:42:21.744575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.638 qpair failed and we were unable to recover it.
00:36:31.638 [2024-11-19 16:42:21.744863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.639 [2024-11-19 16:42:21.744926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.639 qpair failed and we were unable to recover it.
00:36:31.639 [2024-11-19 16:42:21.745227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.639 [2024-11-19 16:42:21.745293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.639 qpair failed and we were unable to recover it.
00:36:31.639 [2024-11-19 16:42:21.745535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.639 [2024-11-19 16:42:21.745600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.639 qpair failed and we were unable to recover it.
00:36:31.639 [2024-11-19 16:42:21.745851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.745914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.746210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.746275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.746496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.746560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.746854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.746918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.747204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.747271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.747562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.747626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.747909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.747972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.748235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.748300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.748549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.748613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.748912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.749215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.749281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.749577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.749642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.749904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.749968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.750231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.750295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.750602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.750668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.750914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.750978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.751258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.751324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.751565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.751628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.751877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.751941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.752236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.752302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.752588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.752652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.752882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.752947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.753242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.753308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.753504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.753568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.753814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.753878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.754129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.754195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.754433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.754497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.754688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.754751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.755038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.755115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.755338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.755401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.639 [2024-11-19 16:42:21.755700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.755765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.756053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.756130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.756391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.756455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.756760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.756824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 00:36:31.639 [2024-11-19 16:42:21.757103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.639 [2024-11-19 16:42:21.757169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.639 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.757413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.757477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.757759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.757823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.758018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.758094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.758336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.758400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.758603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.758667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.758872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.758935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.759182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.759247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.759505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.759569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.759863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.759937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.760145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.760210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.760426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.760492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.760788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.760853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.761163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.761229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.761486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.761550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.761786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.761850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.762064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.762141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.762394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.762459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.762659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.762723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.763017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.763093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.763357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.763421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.763639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.763703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.764008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.764085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.764386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.764451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.764736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.764800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.765040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.765123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.765341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.765405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.765687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.765751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.766050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.766131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.766404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.766469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.766720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.766784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 
00:36:31.640 [2024-11-19 16:42:21.767046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.767145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.767407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.767471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.640 [2024-11-19 16:42:21.767759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.640 [2024-11-19 16:42:21.767822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.640 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.768118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.768184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.768388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.768454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 
00:36:31.641 [2024-11-19 16:42:21.768680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.768753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.769040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.769120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.769330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.769395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.769675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.769738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 00:36:31.641 [2024-11-19 16:42:21.770034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.641 [2024-11-19 16:42:21.770111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.641 qpair failed and we were unable to recover it. 
00:36:31.641 [2024-11-19 16:42:21.770382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.641 [2024-11-19 16:42:21.770446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 
00:36:31.641 qpair failed and we were unable to recover it. 
00:36:31.641 (the three-line error pattern above repeats a further 34 times with advancing timestamps, from [2024-11-19 16:42:21.770660] through [2024-11-19 16:42:21.781655], always for tqpair=0x1443b40 with addr=10.0.0.2, port=4420) 
00:36:31.642 [2024-11-19 16:42:21.781944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.782008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.782360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 403424 Killed "${NVMF_APP[@]}" "$@" 
00:36:31.642 [2024-11-19 16:42:21.782464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.782773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 
00:36:31.642 [2024-11-19 16:42:21.782855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:36:31.642 [2024-11-19 16:42:21.783190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.783270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:36:31.642 [2024-11-19 16:42:21.783659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.783740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.784120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.784200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.784547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.784625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.784989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.785067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.785404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.785486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.785848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.785924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.786262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.786340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.786702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=403973 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 
00:36:31.642 [2024-11-19 16:42:21.786779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 403973 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 403973 ']' 
00:36:31.642 [2024-11-19 16:42:21.787139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:36:31.642 [2024-11-19 16:42:21.787220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:31.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 
00:36:31.642 [2024-11-19 16:42:21.787584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 16:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:36:31.642 [2024-11-19 16:42:21.787660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.788024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.788124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.788493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.788568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.788940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.642 [2024-11-19 16:42:21.789017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 
00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.789340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.789431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.789780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.789858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.790214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.790292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.790667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.790767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.791089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.791163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.642 [2024-11-19 16:42:21.791317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.791361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.791537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.791580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.791783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.791822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.791996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.792035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 00:36:31.642 [2024-11-19 16:42:21.792190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.642 [2024-11-19 16:42:21.792227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.642 qpair failed and we were unable to recover it. 
00:36:31.643 [2024-11-19 16:42:21.792420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:31.643 [2024-11-19 16:42:21.792466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 
00:36:31.643 qpair failed and we were unable to recover it. 
00:36:31.643 (the three-line error pattern above repeats a further 14 times with advancing timestamps, from [2024-11-19 16:42:21.792610] through [2024-11-19 16:42:21.794726], always for tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420) 
00:36:31.643 [2024-11-19 16:42:21.794861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.794894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.795001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.795033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.795178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.795225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.795343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.795375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.795494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.795525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 
00:36:31.643 [2024-11-19 16:42:21.796086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.796125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.796299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.796334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.796473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.796507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.796615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.796648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.796756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.796788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 
00:36:31.643 [2024-11-19 16:42:21.800085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.800250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.800423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.800552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.800663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 
00:36:31.643 [2024-11-19 16:42:21.800811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.800922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.800949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.801041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.801075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.801159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.801191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 00:36:31.643 [2024-11-19 16:42:21.801288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.643 [2024-11-19 16:42:21.801317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.643 qpair failed and we were unable to recover it. 
00:36:31.643 [2024-11-19 16:42:21.801421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.643 [2024-11-19 16:42:21.801449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.643 qpair failed and we were unable to recover it.
00:36:31.643 [2024-11-19 16:42:21.801567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.643 [2024-11-19 16:42:21.801595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.643 qpair failed and we were unable to recover it.
00:36:31.643 [2024-11-19 16:42:21.801718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.643 [2024-11-19 16:42:21.801748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.643 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.801847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.801875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.801973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.802874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.802903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.803919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.803944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.804909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.804933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.805895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.805989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.806015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.644 [2024-11-19 16:42:21.806125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.644 [2024-11-19 16:42:21.806151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.644 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.806901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.806926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.807894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.807920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.808943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.808967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.809907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.809943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.645 [2024-11-19 16:42:21.810967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.645 [2024-11-19 16:42:21.810992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.645 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.811900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.811925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.812901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.812992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.813020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.813360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.813390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.813527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.813555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.813664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.813692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.813793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.646 [2024-11-19 16:42:21.813821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.646 qpair failed and we were unable to recover it.
00:36:31.646 [2024-11-19 16:42:21.813936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.813962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 
00:36:31.646 [2024-11-19 16:42:21.814649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.814897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.814922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 
00:36:31.646 [2024-11-19 16:42:21.815279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.815813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 
00:36:31.646 [2024-11-19 16:42:21.815930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.815957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.816095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.816139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.646 [2024-11-19 16:42:21.816229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.646 [2024-11-19 16:42:21.816255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.646 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.816343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.816453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.816561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.816702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.816820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.816959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.816987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.817101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.817243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.817388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.817565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.817686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.817802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.817920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.817948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.818541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.818868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.818895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.819322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.819789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.819910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.819936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 
00:36:31.647 [2024-11-19 16:42:21.820560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.820856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.820898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.821036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.647 [2024-11-19 16:42:21.821079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.647 qpair failed and we were unable to recover it. 00:36:31.647 [2024-11-19 16:42:21.821267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.821418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.821548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.821691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.821850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.821965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.821991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.822101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.822264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.822408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.822519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.822638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.822751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.822862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.822888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.823373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.823829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.823854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.823976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.824686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.824849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.824980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.825355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.825839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 
00:36:31.648 [2024-11-19 16:42:21.825948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.825974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.826116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.826143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.648 qpair failed and we were unable to recover it. 00:36:31.648 [2024-11-19 16:42:21.826247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.648 [2024-11-19 16:42:21.826273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.826389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.826428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.826551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.826577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.826673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.826699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.826816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.826842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.826922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.826947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.827309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.827829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.827939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.827965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.828696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.828863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.828977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.829420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.829861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.829975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.830098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.830215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.830332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.830442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 00:36:31.649 [2024-11-19 16:42:21.830544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.649 [2024-11-19 16:42:21.830569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.649 qpair failed and we were unable to recover it. 
00:36:31.649 [2024-11-19 16:42:21.830642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.830667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.830746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.830772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.830909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.830935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.831354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.831894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.831978] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:36:31.650 [2024-11-19 16:42:21.832018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 [2024-11-19 16:42:21.832050] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.832536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.832930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.832958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.833169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.833839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.833865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.833981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.834490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.834901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.834989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.835015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 
00:36:31.650 [2024-11-19 16:42:21.835147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.835177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.835264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.835292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.835426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.650 [2024-11-19 16:42:21.835452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.650 qpair failed and we were unable to recover it. 00:36:31.650 [2024-11-19 16:42:21.835540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.835567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.835683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.835709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 
00:36:31.651 [2024-11-19 16:42:21.835826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.835852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.835946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.835973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 
00:36:31.651 [2024-11-19 16:42:21.836462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.836871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.836898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 
00:36:31.651 [2024-11-19 16:42:21.837170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 
00:36:31.651 [2024-11-19 16:42:21.837823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.837850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.837976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.838015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.838103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.838132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.838230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.838257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 00:36:31.651 [2024-11-19 16:42:21.838350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.651 [2024-11-19 16:42:21.838376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.651 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.840847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.652 [2024-11-19 16:42:21.840887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.652 qpair failed and we were unable to recover it.
00:36:31.652 [2024-11-19 16:42:21.841196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.841335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.841484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.841627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.841748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.841908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.841934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.842616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.842899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.842927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.843285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.843822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.843959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.843985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.844655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.844937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.844964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 
00:36:31.652 [2024-11-19 16:42:21.845351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.652 [2024-11-19 16:42:21.845776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.652 qpair failed and we were unable to recover it. 00:36:31.652 [2024-11-19 16:42:21.845870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.845896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.846010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.846665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.846930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.846958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.847349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.847897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.847924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.848061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.848211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.848316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.848463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.848605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.848746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.848913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.848939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.849501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.849892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.849917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.850004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.850030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 
00:36:31.653 [2024-11-19 16:42:21.850143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.850169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.850284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.850310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.653 qpair failed and we were unable to recover it. 00:36:31.653 [2024-11-19 16:42:21.850424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.653 [2024-11-19 16:42:21.850449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.850539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.850565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.850679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.850705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 
00:36:31.654 [2024-11-19 16:42:21.850792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.850818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.850933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.850959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.851088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.851115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.851204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.851230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 00:36:31.654 [2024-11-19 16:42:21.851319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.654 [2024-11-19 16:42:21.851345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.654 qpair failed and we were unable to recover it. 
00:36:31.655 [2024-11-19 16:42:21.856945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.655 [2024-11-19 16:42:21.856984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.655 qpair failed and we were unable to recover it.
00:36:31.657 [2024-11-19 16:42:21.866229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.866349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.866454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.866595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.866713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 
00:36:31.657 [2024-11-19 16:42:21.866851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.866969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.866995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 
00:36:31.657 [2024-11-19 16:42:21.867451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.867910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.867936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.868079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 
00:36:31.657 [2024-11-19 16:42:21.868236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.868368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.868510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.868630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.868775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 
00:36:31.657 [2024-11-19 16:42:21.868920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.657 [2024-11-19 16:42:21.868946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.657 qpair failed and we were unable to recover it. 00:36:31.657 [2024-11-19 16:42:21.869084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.869233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.869403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.869548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.869669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.869807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.869946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.869972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.870335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.870898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.870925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.871003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.871180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.871334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.871471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.871602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.871726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.871869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.871897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.872415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.872865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.872980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.873006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 
00:36:31.658 [2024-11-19 16:42:21.873094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.658 [2024-11-19 16:42:21.873120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.658 qpair failed and we were unable to recover it. 00:36:31.658 [2024-11-19 16:42:21.873201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.873326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.873465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.873593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 
00:36:31.659 [2024-11-19 16:42:21.873708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.873806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.873945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.873971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 
00:36:31.659 [2024-11-19 16:42:21.874314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.874821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 
00:36:31.659 [2024-11-19 16:42:21.874948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.874987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.875089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.875237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.875413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.875583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 
00:36:31.659 [2024-11-19 16:42:21.875699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.875846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.875874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.876017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.876043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.876147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.876174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 00:36:31.659 [2024-11-19 16:42:21.876266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.659 [2024-11-19 16:42:21.876292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.659 qpair failed and we were unable to recover it. 
00:36:31.659 [2024-11-19 16:42:21.876374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.659 [2024-11-19 16:42:21.876399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.659 qpair failed and we were unable to recover it.
00:36:31.662 [the same connect()/qpair error pair repeats through 2024-11-19 16:42:21.891327 for tqpair=0x1443b40, 0x7feed8000b90, and 0x7feed4000b90; every attempt to addr=10.0.0.2, port=4420 failed with errno = 111 and no qpair recovered]
00:36:31.662 [2024-11-19 16:42:21.891441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.891466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.891553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.891578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.891694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.891720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.891800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.891826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.891938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.891963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 
00:36:31.662 [2024-11-19 16:42:21.892054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.892084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.892191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.892217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.892334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.892359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.892472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.892497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 00:36:31.662 [2024-11-19 16:42:21.892609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.662 [2024-11-19 16:42:21.892634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.662 qpair failed and we were unable to recover it. 
00:36:31.662 [2024-11-19 16:42:21.892751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.892779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.892867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.892893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.893481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.893888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.893913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.894164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.894847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.894968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.894994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.895471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.895935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.895960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.896081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.896248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.896390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.896537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.896650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.896754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 
00:36:31.663 [2024-11-19 16:42:21.896935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.896973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.897101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.897130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.897247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.663 [2024-11-19 16:42:21.897272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.663 qpair failed and we were unable to recover it. 00:36:31.663 [2024-11-19 16:42:21.897413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.897439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.897557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.897582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.897695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.897720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.897831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.897855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.897934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.897959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.898375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.898934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.898961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.899107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.899232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.899350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.899494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.899661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.899777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.899893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.899920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.900358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.900842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.900949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.900974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.664 [2024-11-19 16:42:21.901642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.901892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.901918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.902019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.902058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 00:36:31.664 [2024-11-19 16:42:21.902168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.664 [2024-11-19 16:42:21.902195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.664 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.916906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.916932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.917569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.917960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.917986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.918177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.918756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.918911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.918996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.919382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.919912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.919938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.920052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.920710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.920959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.920985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.921100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.921126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 00:36:31.668 [2024-11-19 16:42:21.921270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.668 [2024-11-19 16:42:21.921296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.668 qpair failed and we were unable to recover it. 
00:36:31.668 [2024-11-19 16:42:21.921391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.921417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.921533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.921561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.921673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.921699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.921838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.921957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.921982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.922075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.922675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.922960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.922986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.923080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.923250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.923393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.923560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.923733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.923867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.923893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.924193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.924857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.924967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.924997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.925609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.925859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.925996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.926021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.926105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.926131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 
00:36:31.669 [2024-11-19 16:42:21.926235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.926260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.926375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.926402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.926491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.669 [2024-11-19 16:42:21.926518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.669 qpair failed and we were unable to recover it. 00:36:31.669 [2024-11-19 16:42:21.926658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.670 [2024-11-19 16:42:21.926684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.670 qpair failed and we were unable to recover it. 00:36:31.670 [2024-11-19 16:42:21.926774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.670 [2024-11-19 16:42:21.926802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.670 qpair failed and we were unable to recover it. 
00:36:31.670 [2024-11-19 16:42:21.926900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.670 [2024-11-19 16:42:21.926926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.670 qpair failed and we were unable to recover it.
00:36:31.670 [2024-11-19 16:42:21.927558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.670 [2024-11-19 16:42:21.927585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.670 qpair failed and we were unable to recover it.
00:36:31.670 [2024-11-19 16:42:21.927738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.670 [2024-11-19 16:42:21.927777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.670 qpair failed and we were unable to recover it.
00:36:31.670 [2024-11-19 16:42:21.931425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:31.673 [2024-11-19 16:42:21.942144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.673 [2024-11-19 16:42:21.942171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.673 qpair failed and we were unable to recover it.
00:36:31.673 [2024-11-19 16:42:21.942283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.942309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 00:36:31.673 [2024-11-19 16:42:21.942425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.942451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 00:36:31.673 [2024-11-19 16:42:21.942601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.942627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 00:36:31.673 [2024-11-19 16:42:21.942743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.942769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 00:36:31.673 [2024-11-19 16:42:21.942858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.942884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 
00:36:31.673 [2024-11-19 16:42:21.942980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.943008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.673 qpair failed and we were unable to recover it. 00:36:31.673 [2024-11-19 16:42:21.943127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.673 [2024-11-19 16:42:21.943154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.954 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.943244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.943273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.943385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.943412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.943555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.943582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.943674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.943701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.943832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.943858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.943975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.944416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.944918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.944944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.945026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.945177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.945319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.945459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.945604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.945745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.945862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.945888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.946434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.946918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.946944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 
00:36:31.955 [2024-11-19 16:42:21.947060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.947095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.947216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.947242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.947358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.955 [2024-11-19 16:42:21.947385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.955 qpair failed and we were unable to recover it. 00:36:31.955 [2024-11-19 16:42:21.947479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.947505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.947589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.947615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.947703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.947731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.947820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.947846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.947958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.947984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.948352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.948858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.948884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.948994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.949605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.949959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.949984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.950065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.950222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.950367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.950471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.950617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.950756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.950866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.950892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 
00:36:31.956 [2024-11-19 16:42:21.951660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.956 [2024-11-19 16:42:21.951824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.956 qpair failed and we were unable to recover it. 00:36:31.956 [2024-11-19 16:42:21.951942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.957 [2024-11-19 16:42:21.951968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.957 qpair failed and we were unable to recover it. 00:36:31.957 [2024-11-19 16:42:21.952055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.957 [2024-11-19 16:42:21.952089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.957 qpair failed and we were unable to recover it. 00:36:31.957 [2024-11-19 16:42:21.952179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.957 [2024-11-19 16:42:21.952204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.957 qpair failed and we were unable to recover it. 
00:36:31.957 [2024-11-19 16:42:21.952319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.952345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.952486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.952512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.952593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.952618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.952729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.952755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.952834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.952860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.952975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.953967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.953993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.954890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.954928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.955964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.955991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.956965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.957 [2024-11-19 16:42:21.956990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.957 qpair failed and we were unable to recover it.
00:36:31.957 [2024-11-19 16:42:21.957111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.957939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.957965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.958937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.958975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.959900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.959926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.960937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.960962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.961924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.961950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.962064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.962100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.962198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.962225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.962317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.958 [2024-11-19 16:42:21.962343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.958 qpair failed and we were unable to recover it.
00:36:31.958 [2024-11-19 16:42:21.962427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.962453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.962568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.962594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.962679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.962705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.962806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.962844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.962968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.962995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.963928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.963954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.964938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.964970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.965936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.965962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.966090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.966116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.966232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.966257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.966370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.966395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.966494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.966521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.959 [2024-11-19 16:42:21.966669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.959 [2024-11-19 16:42:21.966697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.959 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.966789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.966815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.966913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.966939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.967867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.967893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.968010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.960 [2024-11-19 16:42:21.968035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.960 qpair failed and we were unable to recover it.
00:36:31.960 [2024-11-19 16:42:21.968125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.968263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.968403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.968518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.968658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 
00:36:31.960 [2024-11-19 16:42:21.968797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.968932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.968958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 
00:36:31.960 [2024-11-19 16:42:21.969471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.969905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.969944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 
00:36:31.960 [2024-11-19 16:42:21.970218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 
00:36:31.960 [2024-11-19 16:42:21.970836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.970861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.970978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.971090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.971228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.971368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 
00:36:31.960 [2024-11-19 16:42:21.971477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.971594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.960 [2024-11-19 16:42:21.971731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.960 [2024-11-19 16:42:21.971757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.960 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.971845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.971872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.971951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.971978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.972075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.972723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.972947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.972972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.973330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.973847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.973959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.973985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.974626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.974925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.974964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.975364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.975910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.975935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.976052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.976170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.976307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.976411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.961 [2024-11-19 16:42:21.976531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 
00:36:31.961 [2024-11-19 16:42:21.976641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.961 [2024-11-19 16:42:21.976667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.961 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.976750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.976775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.976919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.976945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.977261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.977821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.977967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.977993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.978661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.978903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.978986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.979144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.979285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.979431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.979545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.979662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.979810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.979951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.979976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.980600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.980878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.980904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.981023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.981241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 
00:36:31.962 [2024-11-19 16:42:21.981353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.981474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.981609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.962 [2024-11-19 16:42:21.981751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.962 [2024-11-19 16:42:21.981776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.962 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.981891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.981917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.982003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.982141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.982311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.982456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.982586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.982765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.982884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.982910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.983431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.983933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.983961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.984103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.984245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.984384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.984522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.984646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.984799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.984964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.984991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.985453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.985892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.985917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.986002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.986028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.986120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.986146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.986228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.986254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.986331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.986366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 00:36:31.963 [2024-11-19 16:42:21.986451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.963 [2024-11-19 16:42:21.986477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.963 qpair failed and we were unable to recover it. 
00:36:31.963 [2024-11-19 16:42:21.986570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.963 [2024-11-19 16:42:21.986598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.963 qpair failed and we were unable to recover it.
00:36:31.963 [2024-11-19 16:42:21.986712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.963 [2024-11-19 16:42:21.986738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.986743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:31.964 [2024-11-19 16:42:21.986784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:31.964 [2024-11-19 16:42:21.986811] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:31.964 [2024-11-19 16:42:21.986833] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:31.964 [2024-11-19 16:42:21.986853] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:31.964 [2024-11-19 16:42:21.986843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.964 [2024-11-19 16:42:21.986868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.987013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 
00:36:31.964 [2024-11-19 16:42:21.987665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.987937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.987963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 
00:36:31.964 [2024-11-19 16:42:21.988293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.988863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.988888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 
00:36:31.964 [2024-11-19 16:42:21.988965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.964 [2024-11-19 16:42:21.988991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.988995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:36:31.964 [2024-11-19 16:42:21.989031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:36:31.964 [2024-11-19 16:42:21.989085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
00:36:31.964 [2024-11-19 16:42:21.989092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:36:31.964 [2024-11-19 16:42:21.989099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.964 [2024-11-19 16:42:21.989126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.989266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.964 [2024-11-19 16:42:21.989290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.989372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.964 [2024-11-19 16:42:21.989398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.964 qpair failed and we were unable to recover it.
00:36:31.964 [2024-11-19 16:42:21.989481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.989512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.989606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.989631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.989725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.989750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.989843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.989869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.989958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.989986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 
00:36:31.964 [2024-11-19 16:42:21.990098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.990124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.990217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.990245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.990336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.990362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.990480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.990507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.964 qpair failed and we were unable to recover it. 00:36:31.964 [2024-11-19 16:42:21.990590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.964 [2024-11-19 16:42:21.990616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.965 qpair failed and we were unable to recover it. 
00:36:31.965 [2024-11-19 16:42:21.990817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.965 [2024-11-19 16:42:21.990855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.965 qpair failed and we were unable to recover it.
00:36:31.965 [2024-11-19 16:42:21.991801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.965 [2024-11-19 16:42:21.991840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.965 qpair failed and we were unable to recover it.
00:36:31.967 [2024-11-19 16:42:22.004001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.967 [2024-11-19 16:42:22.004028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.967 qpair failed and we were unable to recover it. 00:36:31.967 [2024-11-19 16:42:22.004129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.967 [2024-11-19 16:42:22.004156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.967 qpair failed and we were unable to recover it. 00:36:31.967 [2024-11-19 16:42:22.004236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.967 [2024-11-19 16:42:22.004262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.967 qpair failed and we were unable to recover it. 00:36:31.967 [2024-11-19 16:42:22.004375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.004517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.004623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.004762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.004863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.004962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.004988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.005200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.005848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.005963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.005991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.006515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.006903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.006981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.007101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.007685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.007968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.007994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.008334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.008841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 
00:36:31.968 [2024-11-19 16:42:22.008962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.968 [2024-11-19 16:42:22.008990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.968 qpair failed and we were unable to recover it. 00:36:31.968 [2024-11-19 16:42:22.009095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.009591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.009870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.009988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.010245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.010873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.010899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.010995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.011486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.011887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.011913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.012177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 
00:36:31.969 [2024-11-19 16:42:22.012818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.012950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.012975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.013054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.969 [2024-11-19 16:42:22.013085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.969 qpair failed and we were unable to recover it. 00:36:31.969 [2024-11-19 16:42:22.013175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 00:36:31.970 [2024-11-19 16:42:22.013320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 
00:36:31.970 [2024-11-19 16:42:22.013432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 00:36:31.970 [2024-11-19 16:42:22.013582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 00:36:31.970 [2024-11-19 16:42:22.013721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 00:36:31.970 [2024-11-19 16:42:22.013828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 00:36:31.970 [2024-11-19 16:42:22.013943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.970 [2024-11-19 16:42:22.013971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.970 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.027682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.027708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.027789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.027815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.027966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.027991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.028284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.028888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.028914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.028995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.029504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.029849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.029876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.030138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.030703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.030876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.030978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.031006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.031108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.031135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 00:36:31.973 [2024-11-19 16:42:22.031251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.973 [2024-11-19 16:42:22.031277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.973 qpair failed and we were unable to recover it. 
00:36:31.973 [2024-11-19 16:42:22.031364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.031478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.031577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.031686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.031811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.031956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.031982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.032603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.032944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.032971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.033178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.033805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.033929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.033955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.034450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.034903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.034929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.035012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 
00:36:31.974 [2024-11-19 16:42:22.035572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.974 [2024-11-19 16:42:22.035838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.974 qpair failed and we were unable to recover it. 00:36:31.974 [2024-11-19 16:42:22.035921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.035947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 00:36:31.975 [2024-11-19 16:42:22.036054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 
00:36:31.975 [2024-11-19 16:42:22.036170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 00:36:31.975 [2024-11-19 16:42:22.036281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 00:36:31.975 [2024-11-19 16:42:22.036397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 00:36:31.975 [2024-11-19 16:42:22.036562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 00:36:31.975 [2024-11-19 16:42:22.036669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.975 [2024-11-19 16:42:22.036694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.975 qpair failed and we were unable to recover it. 
00:36:31.975 [2024-11-19 16:42:22.036774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.975 [2024-11-19 16:42:22.036801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.975 qpair failed and we were unable to recover it.
00:36:31.975 [2024-11-19 16:42:22.036891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.975 [2024-11-19 16:42:22.036917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.975 qpair failed and we were unable to recover it.
00:36:31.975 [2024-11-19 16:42:22.037004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.975 [2024-11-19 16:42:22.037030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.975 qpair failed and we were unable to recover it.
00:36:31.975 [2024-11-19 16:42:22.037126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.975 [2024-11-19 16:42:22.037152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.975 qpair failed and we were unable to recover it.
00:36:31.975 [2024-11-19 16:42:22.037264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.975 [2024-11-19 16:42:22.037290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.975 qpair failed and we were unable to recover it.
00:36:31.977 [repeated: the same posix_sock_create connect() failure (errno = 111) and nvme_tcp_qpair_connect_sock error recur for tqpair=0x1443b40 and tqpair=0x7feed8000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it.", through 2024-11-19 16:42:22.050675]
00:36:31.978 [2024-11-19 16:42:22.050763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.050790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.050874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.050901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.050991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 
00:36:31.978 [2024-11-19 16:42:22.051362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.051797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 
00:36:31.978 [2024-11-19 16:42:22.051907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.051933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 
00:36:31.978 [2024-11-19 16:42:22.052520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.052957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.052983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 
00:36:31.978 [2024-11-19 16:42:22.053061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.053094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.053183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.053209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.053294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.053320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.053392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.053418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.978 qpair failed and we were unable to recover it. 00:36:31.978 [2024-11-19 16:42:22.053504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.978 [2024-11-19 16:42:22.053530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.053620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.053646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.053764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.053789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.053873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.053900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.053984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.054206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.054801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.054907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.054933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.055479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.055956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.055982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.056094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.056695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.056950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.056978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.057321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.057841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.057869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 
00:36:31.979 [2024-11-19 16:42:22.057984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.058011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.058109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.058136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.979 qpair failed and we were unable to recover it. 00:36:31.979 [2024-11-19 16:42:22.058253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.979 [2024-11-19 16:42:22.058279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.058362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.058477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 
00:36:31.980 [2024-11-19 16:42:22.058602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.058711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.058834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.058952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.058978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.059062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 
00:36:31.980 [2024-11-19 16:42:22.059185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.059346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.059457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.059582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 00:36:31.980 [2024-11-19 16:42:22.059696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.980 [2024-11-19 16:42:22.059722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.980 qpair failed and we were unable to recover it. 
00:36:31.980 [2024-11-19 16:42:22.059852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.980 [2024-11-19 16:42:22.059891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.980 qpair failed and we were unable to recover it.
00:36:31.980 [2024-11-19 16:42:22.060132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.980 [2024-11-19 16:42:22.060161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:31.980 qpair failed and we were unable to recover it.
00:36:31.980 [2024-11-19 16:42:22.060592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.980 [2024-11-19 16:42:22.060619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.980 qpair failed and we were unable to recover it.
00:36:31.982 [2024-11-19 16:42:22.070580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.982 [2024-11-19 16:42:22.070620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420
00:36:31.982 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / "qpair failed and we were unable to recover it" message triples repeat continuously from 16:42:22.059852 through 16:42:22.073664 for tqpair values 0x7feed8000b90, 0x7feecc000b90, 0x1443b40, and 0x7feed4000b90, all targeting addr=10.0.0.2, port=4420]
00:36:31.983 [2024-11-19 16:42:22.073754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.073781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.073879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.073919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 
00:36:31.983 [2024-11-19 16:42:22.074417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.074872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.074901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 
00:36:31.983 [2024-11-19 16:42:22.075020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 
00:36:31.983 [2024-11-19 16:42:22.075625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.075881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.983 [2024-11-19 16:42:22.075975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.983 [2024-11-19 16:42:22.076002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.983 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.076111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.076248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.076392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.076540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.076647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.076757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.076882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.076921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.077538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.077908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.077934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.078158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.078712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.078924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.078951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.079347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.079829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.079970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.079997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.080086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.080221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.080325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.080435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 
00:36:31.984 [2024-11-19 16:42:22.080552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.984 qpair failed and we were unable to recover it. 00:36:31.984 [2024-11-19 16:42:22.080659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.984 [2024-11-19 16:42:22.080685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.080765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.080792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.080877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.080903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.080982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 
00:36:31.985 [2024-11-19 16:42:22.081097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.081212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.081380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.081488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.081625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 
00:36:31.985 [2024-11-19 16:42:22.081753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.081903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.081929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 
00:36:31.985 [2024-11-19 16:42:22.082373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.082859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.082885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 
00:36:31.985 [2024-11-19 16:42:22.083026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.083052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.083150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.083177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.083267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.083293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.083377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.083404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 00:36:31.985 [2024-11-19 16:42:22.083493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.985 [2024-11-19 16:42:22.083521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.985 qpair failed and we were unable to recover it. 
00:36:31.985 [2024-11-19 16:42:22.084718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.985 [2024-11-19 16:42:22.084756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feecc000b90 with addr=10.0.0.2, port=4420
00:36:31.985 qpair failed and we were unable to recover it.
00:36:31.985 [2024-11-19 16:42:22.084869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.985 [2024-11-19 16:42:22.084909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420
00:36:31.985 qpair failed and we were unable to recover it.
00:36:31.988 [2024-11-19 16:42:22.096932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.096960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 
00:36:31.988 [2024-11-19 16:42:22.097642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.097887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.097913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.098002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.098028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.098126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.098152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 
00:36:31.988 [2024-11-19 16:42:22.098256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.098282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.098398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.098423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.988 qpair failed and we were unable to recover it. 00:36:31.988 [2024-11-19 16:42:22.098514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.988 [2024-11-19 16:42:22.098541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.098639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.098667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.098758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.098786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.098872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.098898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.098982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.099485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.099957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.099983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.100067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.100631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.100882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.100908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.101259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.101833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.101945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.101973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.102403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 00:36:31.989 [2024-11-19 16:42:22.102840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.989 [2024-11-19 16:42:22.102866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.989 qpair failed and we were unable to recover it. 
00:36:31.989 [2024-11-19 16:42:22.102959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.102985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 
00:36:31.990 [2024-11-19 16:42:22.103584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.103907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.103933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 
00:36:31.990 [2024-11-19 16:42:22.104135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 
00:36:31.990 [2024-11-19 16:42:22.104740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.104969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.104997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 
00:36:31.990 [2024-11-19 16:42:22.105321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 00:36:31.990 [2024-11-19 16:42:22.105799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.990 [2024-11-19 16:42:22.105824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.990 qpair failed and we were unable to recover it. 
00:36:31.990 [2024-11-19 16:42:22.105969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.990 [2024-11-19 16:42:22.105995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.990 qpair failed and we were unable to recover it.
00:36:31.993 [2024-11-19 16:42:22.106086 through 16:42:22.119462] the same connect()/qpair-failure triplet repeats 114 more times, alternating across tqpair=0x1443b40, 0x7feed4000b90, and 0x7feed8000b90 (addr=10.0.0.2, port=4420, errno = 111 in every case)
00:36:31.993 [2024-11-19 16:42:22.119547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.119573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.119686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.119711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.119792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.119819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.119909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.119937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.120021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.120048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 
00:36:31.993 [2024-11-19 16:42:22.120158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.120185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.120274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.993 [2024-11-19 16:42:22.120300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.993 qpair failed and we were unable to recover it. 00:36:31.993 [2024-11-19 16:42:22.120413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.120524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.120634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.120741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.120848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.120952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.120977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.121322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.121831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.121940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.121966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.122556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.122894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.122920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.123125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.123711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.123932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.123957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.124034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.124164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.124276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 A controller has encountered a failure and is being reset. 00:36:31.994 [2024-11-19 16:42:22.124402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.124514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.124654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 00:36:31.994 [2024-11-19 16:42:22.124766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.994 [2024-11-19 16:42:22.124793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.994 qpair failed and we were unable to recover it. 
00:36:31.994 [2024-11-19 16:42:22.124888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.124915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed4000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 
00:36:31.995 [2024-11-19 16:42:22.125462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7feed8000b90 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 00:36:31.995 [2024-11-19 16:42:22.125959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.995 [2024-11-19 16:42:22.125984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420 00:36:31.995 qpair failed and we were unable to recover it. 
00:36:31.995 [2024-11-19 16:42:22.126063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.995 [2024-11-19 16:42:22.126096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.995 qpair failed and we were unable to recover it.
00:36:31.995 [2024-11-19 16:42:22.126178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.995 [2024-11-19 16:42:22.126204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1443b40 with addr=10.0.0.2, port=4420
00:36:31.995 qpair failed and we were unable to recover it.
00:36:31.995 [2024-11-19 16:42:22.126306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.995 [2024-11-19 16:42:22.126343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1451970 with addr=10.0.0.2, port=4420 [2024-11-19 16:42:22.126361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1451970 is same with the state(6) to be set
00:36:31.995 [2024-11-19 16:42:22.126391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1451970 (9): Bad file descriptor
00:36:31.995 [2024-11-19 16:42:22.126410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state
00:36:31.995 [2024-11-19 16:42:22.126433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed
00:36:31.995 [2024-11-19 16:42:22.126449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:36:31.995 Unable to reset the controller.
00:36:31.995 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:31.995 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:31.995 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:31.995 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:31.995 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 Malloc0
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 [2024-11-19 16:42:22.314792] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 [2024-11-19 16:42:22.343111] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:32.256 16:42:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 403449
00:36:33.191 Controller properly reset.
00:36:38.465 Initializing NVMe Controllers
00:36:38.465 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:38.465 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:36:38.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:36:38.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:36:38.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:36:38.465 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:36:38.465 Initialization complete. Launching workers.
00:36:38.465 Starting thread on core 1
00:36:38.465 Starting thread on core 2
00:36:38.465 Starting thread on core 3
00:36:38.465 Starting thread on core 0
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:36:38.465
00:36:38.465 real 0m10.644s
00:36:38.465 user 0m33.400s
00:36:38.465 sys 0m7.717s
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:38.465 ************************************
00:36:38.465 END TEST nvmf_target_disconnect_tc2
00:36:38.465 ************************************
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20}
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:36:38.465 rmmod nvme_tcp
00:36:38.465 rmmod nvme_fabrics
00:36:38.465 rmmod nvme_keyring
00:36:38.465 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 403973 ']'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 403973 ']'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 403973'
00:36:38.466 killing process with pid 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 403973
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:36:38.466 16:42:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:36:40.375
00:36:40.375 real 0m15.581s
00:36:40.375 user 0m58.850s
00:36:40.375 sys 0m10.240s
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:36:40.375 ************************************
00:36:40.375 END TEST nvmf_target_disconnect
00:36:40.375 ************************************
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:36:40.375
00:36:40.375 real 6m41.036s
00:36:40.375 user 17m22.483s
00:36:40.375 sys 1m29.710s
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:40.375 16:42:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:36:40.375 ************************************
00:36:40.375 END TEST nvmf_host
00:36:40.375 ************************************
00:36:40.375 16:42:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]]
00:36:40.375 16:42:30 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]]
00:36:40.375 16:42:30 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:36:40.375 16:42:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:36:40.375 16:42:30 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable
00:36:40.375 16:42:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:36:40.375 ************************************
00:36:40.375 START TEST nvmf_target_core_interrupt_mode
00:36:40.375 ************************************
00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode
00:36:40.375 * Looking for test storage...
00:36:40.375 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.375 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:40.376 16:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:40.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.376 --rc 
genhtml_branch_coverage=1 00:36:40.376 --rc genhtml_function_coverage=1 00:36:40.376 --rc genhtml_legend=1 00:36:40.376 --rc geninfo_all_blocks=1 00:36:40.376 --rc geninfo_unexecuted_blocks=1 00:36:40.376 00:36:40.376 ' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:40.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.376 --rc genhtml_branch_coverage=1 00:36:40.376 --rc genhtml_function_coverage=1 00:36:40.376 --rc genhtml_legend=1 00:36:40.376 --rc geninfo_all_blocks=1 00:36:40.376 --rc geninfo_unexecuted_blocks=1 00:36:40.376 00:36:40.376 ' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:40.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.376 --rc genhtml_branch_coverage=1 00:36:40.376 --rc genhtml_function_coverage=1 00:36:40.376 --rc genhtml_legend=1 00:36:40.376 --rc geninfo_all_blocks=1 00:36:40.376 --rc geninfo_unexecuted_blocks=1 00:36:40.376 00:36:40.376 ' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:40.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.376 --rc genhtml_branch_coverage=1 00:36:40.376 --rc genhtml_function_coverage=1 00:36:40.376 --rc genhtml_legend=1 00:36:40.376 --rc geninfo_all_blocks=1 00:36:40.376 --rc geninfo_unexecuted_blocks=1 00:36:40.376 00:36:40.376 ' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.376 
16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.376 16:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:40.376 
16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:40.376 ************************************ 00:36:40.376 START TEST nvmf_abort 00:36:40.376 ************************************ 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:40.376 * Looking for test storage... 
00:36:40.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:40.376 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:36:40.377 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:40.637 16:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:40.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.637 --rc genhtml_branch_coverage=1 00:36:40.637 --rc genhtml_function_coverage=1 00:36:40.637 --rc genhtml_legend=1 00:36:40.637 --rc geninfo_all_blocks=1 00:36:40.637 --rc geninfo_unexecuted_blocks=1 00:36:40.637 00:36:40.637 ' 00:36:40.637 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:40.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.638 --rc genhtml_branch_coverage=1 00:36:40.638 --rc genhtml_function_coverage=1 00:36:40.638 --rc genhtml_legend=1 00:36:40.638 --rc geninfo_all_blocks=1 00:36:40.638 --rc geninfo_unexecuted_blocks=1 00:36:40.638 00:36:40.638 ' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:40.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.638 --rc genhtml_branch_coverage=1 00:36:40.638 --rc genhtml_function_coverage=1 00:36:40.638 --rc genhtml_legend=1 00:36:40.638 --rc geninfo_all_blocks=1 00:36:40.638 --rc geninfo_unexecuted_blocks=1 00:36:40.638 00:36:40.638 ' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:40.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:40.638 --rc genhtml_branch_coverage=1 00:36:40.638 --rc genhtml_function_coverage=1 00:36:40.638 --rc genhtml_legend=1 00:36:40.638 --rc geninfo_all_blocks=1 00:36:40.638 --rc geninfo_unexecuted_blocks=1 00:36:40.638 00:36:40.638 ' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:40.638 16:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:40.638 16:42:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:40.638 16:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:43.178 16:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:43.178 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:43.178 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:43.178 
16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:43.178 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:43.178 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:43.179 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:43.179 16:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:43.179 16:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:43.179 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:43.179 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:36:43.179 00:36:43.179 --- 10.0.0.2 ping statistics --- 00:36:43.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.179 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:43.179 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:43.179 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:36:43.179 00:36:43.179 --- 10.0.0.1 ping statistics --- 00:36:43.179 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:43.179 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=406766 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 406766 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 406766 ']' 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:43.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.179 [2024-11-19 16:42:33.214485] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:43.179 [2024-11-19 16:42:33.215546] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:36:43.179 [2024-11-19 16:42:33.215608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:43.179 [2024-11-19 16:42:33.283820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:43.179 [2024-11-19 16:42:33.327112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:43.179 [2024-11-19 16:42:33.327166] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:43.179 [2024-11-19 16:42:33.327180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:43.179 [2024-11-19 16:42:33.327190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:43.179 [2024-11-19 16:42:33.327199] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:43.179 [2024-11-19 16:42:33.328597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:43.179 [2024-11-19 16:42:33.328663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:43.179 [2024-11-19 16:42:33.328659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:43.179 [2024-11-19 16:42:33.407782] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:43.179 [2024-11-19 16:42:33.407980] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:43.179 [2024-11-19 16:42:33.407985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:36:43.179 [2024-11-19 16:42:33.408276] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.179 [2024-11-19 16:42:33.465306] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:43.179 Malloc0 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.179 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.440 Delay0 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.440 [2024-11-19 16:42:33.537538] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:43.440 16:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:43.440 [2024-11-19 16:42:33.644965] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:46.032 Initializing NVMe Controllers 00:36:46.032 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:46.032 controller IO queue size 128 less than required 00:36:46.032 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:46.032 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:46.032 Initialization complete. Launching workers. 
00:36:46.032 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28251 00:36:46.032 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28308, failed to submit 66 00:36:46.032 success 28251, unsuccessful 57, failed 0 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:46.032 rmmod nvme_tcp 00:36:46.032 rmmod nvme_fabrics 00:36:46.032 rmmod nvme_keyring 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:46.032 16:42:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 406766 ']' 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 406766 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 406766 ']' 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 406766 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 406766 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 406766' 00:36:46.032 killing process with pid 406766 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 406766 00:36:46.032 16:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 406766 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:46.032 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:46.033 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:46.033 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:46.033 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:46.033 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:46.033 16:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:47.942 00:36:47.942 real 0m7.495s 00:36:47.942 user 0m9.485s 00:36:47.942 sys 0m2.938s 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:47.942 ************************************ 00:36:47.942 END TEST nvmf_abort 00:36:47.942 ************************************ 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 
-- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:47.942 ************************************ 00:36:47.942 START TEST nvmf_ns_hotplug_stress 00:36:47.942 ************************************ 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:47.942 * Looking for test storage... 00:36:47.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:36:47.942 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 
ver2_l 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:48.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.201 --rc genhtml_branch_coverage=1 00:36:48.201 --rc genhtml_function_coverage=1 00:36:48.201 --rc genhtml_legend=1 00:36:48.201 --rc geninfo_all_blocks=1 00:36:48.201 --rc geninfo_unexecuted_blocks=1 00:36:48.201 00:36:48.201 ' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:48.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.201 --rc genhtml_branch_coverage=1 00:36:48.201 --rc genhtml_function_coverage=1 00:36:48.201 --rc genhtml_legend=1 00:36:48.201 --rc geninfo_all_blocks=1 00:36:48.201 --rc geninfo_unexecuted_blocks=1 00:36:48.201 00:36:48.201 ' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:36:48.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.201 --rc genhtml_branch_coverage=1 00:36:48.201 --rc genhtml_function_coverage=1 00:36:48.201 --rc genhtml_legend=1 00:36:48.201 --rc geninfo_all_blocks=1 00:36:48.201 --rc geninfo_unexecuted_blocks=1 00:36:48.201 00:36:48.201 ' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:48.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:48.201 --rc genhtml_branch_coverage=1 00:36:48.201 --rc genhtml_function_coverage=1 00:36:48.201 --rc genhtml_legend=1 00:36:48.201 --rc geninfo_all_blocks=1 00:36:48.201 --rc geninfo_unexecuted_blocks=1 00:36:48.201 00:36:48.201 ' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:48.201 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:48.202 16:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:48.202 16:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:48.202 16:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:50.105 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:50.105 
16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:50.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:50.105 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:50.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.105 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:50.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:50.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:50.105 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:50.106 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:50.364 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:50.364 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:36:50.364 00:36:50.364 --- 10.0.0.2 ping statistics --- 00:36:50.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.364 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:50.364 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:50.364 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:36:50.364 00:36:50.364 --- 10.0.0.1 ping statistics --- 00:36:50.364 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:50.364 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:50.364 16:42:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=409003 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 409003 00:36:50.364 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 409003 ']' 00:36:50.365 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:50.365 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:50.365 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:50.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:50.365 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:50.365 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:50.365 [2024-11-19 16:42:40.604313] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:50.365 [2024-11-19 16:42:40.605539] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:36:50.365 [2024-11-19 16:42:40.605595] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:50.365 [2024-11-19 16:42:40.684128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:50.623 [2024-11-19 16:42:40.730631] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:50.623 [2024-11-19 16:42:40.730684] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:50.623 [2024-11-19 16:42:40.730713] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:50.623 [2024-11-19 16:42:40.730731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:50.623 [2024-11-19 16:42:40.730741] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:50.623 [2024-11-19 16:42:40.732263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:50.623 [2024-11-19 16:42:40.732331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:50.623 [2024-11-19 16:42:40.732336] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.623 [2024-11-19 16:42:40.816258] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:50.623 [2024-11-19 16:42:40.816494] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:50.623 [2024-11-19 16:42:40.816505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:50.623 [2024-11-19 16:42:40.816749] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
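The test body that follows repeats one pattern per iteration: check the perf process is still alive (`ns_hotplug_stress.sh@44`), remove namespace 1 (`@45`), re-attach `Delay0` (`@46`), bump `null_size` from its initial 1000 (`@49`), and resize `NULL1` to the new size (`@50`). A dry-run sketch of that loop; the `rpc` function is an assumed stand-in that echoes instead of invoking `scripts/rpc.py` against a live target:

```shell
#!/bin/sh
# Dry-run sketch of the ns_hotplug_stress.sh hotplug loop recorded below.
# The real script drives scripts/rpc.py against a running nvmf_tgt; here
# rpc() only echoes, so just the loop structure is exercised.
rpc() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1
null_size=1000

iter=0
while [ "$iter" -lt 3 ]; do                      # real loop runs while perf does (~30 s)
    rpc nvmf_subsystem_remove_ns "$NQN" 1        # ns_hotplug_stress.sh@45
    rpc nvmf_subsystem_add_ns "$NQN" Delay0      # ns_hotplug_stress.sh@46
    null_size=$((null_size + 1))                 # ns_hotplug_stress.sh@49
    rpc bdev_null_resize NULL1 "$null_size"      # ns_hotplug_stress.sh@50
    iter=$((iter + 1))
done
```

In the log the counter climbs exactly this way, 1001, 1002, 1003, …, one increment per remove/add/resize round while `spdk_nvme_perf` keeps issuing reads against the subsystem.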
00:36:50.623 16:42:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:50.881 [2024-11-19 16:42:41.113015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:50.881 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:51.139 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:51.399 [2024-11-19 16:42:41.661326] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:51.399 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:51.657 16:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:52.222 Malloc0 00:36:52.222 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:52.222 Delay0 00:36:52.222 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:52.792 16:42:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:36:52.792 NULL1 00:36:53.050 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:36:53.309 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=409388 00:36:53.309 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:36:53.309 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:53.309 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:53.567 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:53.825 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:36:53.825 16:42:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:36:54.082 true 00:36:54.082 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:54.082 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:54.340 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:54.598 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:36:54.598 16:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:36:54.857 true 00:36:54.857 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:54.857 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:55.114 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:55.681 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:36:55.681 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:36:55.681 true 00:36:55.681 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:55.681 16:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:56.620 Read completed with error (sct=0, sc=11) 00:36:56.620 16:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:56.620 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:36:56.878 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:36:56.878 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:36:57.136 true 00:36:57.136 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:57.136 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:57.394 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:57.652 16:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:36:57.652 16:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:36:57.910 true 00:36:57.910 16:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:57.910 16:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:58.847 16:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:36:59.104 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:36:59.104 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:36:59.360 true 00:36:59.360 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:36:59.360 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:36:59.618 16:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:36:59.876 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:36:59.876 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:00.135 true 00:37:00.135 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:00.135 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:00.394 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.652 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:00.652 16:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:00.910 true 00:37:00.910 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:00.910 16:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:01.845 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:01.845 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.103 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:02.103 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:02.362 true 00:37:02.362 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:02.362 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.621 16:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.880 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:02.880 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:03.139 true 00:37:03.139 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:03.139 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.397 16:42:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.655 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:03.655 16:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:03.914 true 00:37:03.914 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:03.914 16:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:04.851 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.111 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.111 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:05.111 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:05.370 true 00:37:05.628 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:05.628 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.886 16:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.143 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:06.143 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:06.401 true 00:37:06.401 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:06.401 16:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.335 16:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.335 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:07.335 16:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:07.335 16:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:07.594 true 00:37:07.594 16:42:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:07.594 16:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.853 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.111 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:08.111 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:08.369 true 00:37:08.369 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:08.369 16:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:09.306 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.564 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:09.564 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:09.823 true 00:37:09.823 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:09.823 16:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.081 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.339 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:10.339 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:10.597 true 00:37:10.597 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:10.597 16:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.855 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:11.113 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:11.113 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:11.372 true 00:37:11.372 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:11.372 16:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.306 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.564 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:12.564 16:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:12.822 true 00:37:12.822 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:12.822 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.081 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.340 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:13.340 16:43:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:13.598 true 00:37:13.598 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:13.598 16:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.856 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.114 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:14.114 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:14.372 true 00:37:14.372 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:14.372 16:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:15.306 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.306 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.565 16:43:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:15.565 16:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:15.826 true 00:37:16.086 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:16.086 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.344 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.602 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:16.602 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:16.860 true 00:37:16.860 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:16.860 16:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.794 16:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:37:17.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:17.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:17.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:17.794 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:17.794 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:18.053 true 00:37:18.053 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:18.053 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.312 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.571 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:18.571 16:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:18.829 true 00:37:18.829 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:18.829 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:19.768 16:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.026 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:20.026 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:20.285 true 00:37:20.285 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:20.285 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.543 16:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.802 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:20.802 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:21.060 true 00:37:21.060 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:21.060 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.319 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.578 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:21.578 16:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:21.836 true 00:37:21.836 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:21.836 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.774 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.774 16:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.032 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:37:23.032 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:37:23.290 true 00:37:23.290 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388 00:37:23.290 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:23.550 16:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:37:23.550 Initializing NVMe Controllers
00:37:23.550 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:23.550 Controller IO queue size 128, less than required.
00:37:23.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:23.550 Controller IO queue size 128, less than required.
00:37:23.550 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:23.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:37:23.550 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:37:23.550 Initialization complete. Launching workers.
00:37:23.550 ========================================================
00:37:23.550 Latency(us)
00:37:23.550 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:23.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:     465.68       0.23  112216.17    3513.26 1014490.78
00:37:23.550 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    8382.52       4.09   15224.91    2453.17  445737.98
00:37:23.550 ========================================================
00:37:23.550 Total                                                                    :    8848.20       4.32   20329.51    2453.17 1014490.78
00:37:23.550
00:37:23.809 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:37:23.809 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:37:24.067 true
00:37:24.067 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 409388
00:37:24.067 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (409388) - No such process
00:37:24.067 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 409388
00:37:24.067 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:24.325 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:24.584 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:24.584 16:43:14
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:24.584 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:24.584 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:24.584 16:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:24.842 null0 00:37:24.842 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:24.842 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:24.842 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:25.101 null1 00:37:25.101 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.101 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.101 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:25.359 null2 00:37:25.359 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.359 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.359 16:43:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:25.618 null3 00:37:25.618 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:25.618 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:25.618 16:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:25.878 null4 00:37:26.138 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:26.138 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:26.138 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:26.399 null5 00:37:26.399 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:26.399 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:26.399 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:26.659 null6 00:37:26.659 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:26.659 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:26.660 16:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:26.919 null7 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
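The trace above fires paired `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` RPCs from the `add_remove` helper (`ns_hotplug_stress.sh@14`-`@18` in the trace markers). A minimal runnable sketch of that loop, reconstructed from the traced commands; the `rpc` function here is a stub standing in for `scripts/rpc.py` so the sketch runs without a live nvmf target:

```shell
#!/usr/bin/env bash
# Sketch of the add_remove helper seen in the trace (ns_hotplug_stress.sh@14-@18).
# "rpc" is a local stub for /var/jenkins/.../spdk/scripts/rpc.py; the real test
# sends these as JSON-RPC calls to the running nvmf target.
rpc() { echo "rpc.py $*"; }

add_remove() {
	local nsid=$1 bdev=$2
	# Hot-plug the namespace ten times in a row, as the traced loop does.
	for ((i = 0; i < 10; i++)); do
		rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
		rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
	done
}

add_remove 1 null0
```

Removing a namespace while I/O is still outstanding is what the suppressed `Read completed with error (sct=0, sc=11)` records elsewhere in the trace most likely correspond to.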
00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
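The repeated `pids+=($!)` entries above accumulate the PIDs of the background workers that the test later reaps in one shot (the `wait 413309 413310 ...` record below). A runnable sketch of that fan-out pattern (`ns_hotplug_stress.sh@62`-`@66`), with a trivial stand-in worker in place of `add_remove`:

```shell
#!/usr/bin/env bash
# Sketch of the worker fan-out: launch nthreads background jobs, record each
# PID via $!, then wait on the whole set at once.
worker() { :; }   # stand-in for the real "add_remove <nsid> <bdev> &" workers

nthreads=8
pids=()
for ((i = 0; i < nthreads; i++)); do
	worker &
	pids+=($!)   # $! expands to the PID of the most recent background job
done
wait "${pids[@]}"   # corresponds to the "wait 413309 413310 ..." trace record
```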
00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 413309 413310 413312 413313 413316 413318 413320 413322 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:26.919 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:27.179 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.438 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:27.697 16:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:27.697 16:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:27.955 16:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:27.955 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.956 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.956 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:27.956 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:27.956 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:27.956 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:28.214 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:28.214 16:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:28.214 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:28.214 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:28.214 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:28.214 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:28.472 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:28.472 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:28.731 16:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:28.990 16:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:28.990 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.249 16:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.249 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.249 16:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:29.507 16:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:29.507 16:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:29.766 16:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:29.766 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.024 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:30.025 16:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:30.025 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:30.592 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.592 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 
3 nqn.2016-06.io.spdk:cnode1 null2 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:30.593 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:30.852 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:30.852 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.852 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:30.852 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:30.852 16:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.110 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:31.369 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.628 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:31.629 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.629 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.629 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:31.889 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:31.889 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:31.889 16:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:31.889 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.148 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:32.407 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:32.666 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:32.666 16:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:32.924 rmmod nvme_tcp
00:37:32.924 rmmod nvme_fabrics
00:37:32.924 rmmod nvme_keyring
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 409003 ']'
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 409003
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 409003 ']'
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 409003
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 409003
00:37:32.924 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:37:32.925 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:37:32.925 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 409003' killing process with pid 409003
00:37:32.925 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 409003
00:37:32.925 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 409003
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:33.184 16:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:37:35.721
00:37:35.721 real 0m47.298s
00:37:35.721 user 3m20.219s
00:37:35.721 sys 0m21.304s
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:37:35.721 ************************************
00:37:35.721 END TEST nvmf_ns_hotplug_stress
00:37:35.721 ************************************
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x
00:37:35.721 ************************************
00:37:35.721 START TEST nvmf_delete_subsystem
00:37:35.721 ************************************
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode
00:37:35.721 * Looking for test storage... * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-:
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-:
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<'
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 ))
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:37:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.721 --rc genhtml_branch_coverage=1
00:37:35.721 --rc genhtml_function_coverage=1
00:37:35.721 --rc genhtml_legend=1
00:37:35.721 --rc geninfo_all_blocks=1
00:37:35.721 --rc geninfo_unexecuted_blocks=1
00:37:35.721
00:37:35.721 '
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:37:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.721 --rc genhtml_branch_coverage=1
00:37:35.721 --rc genhtml_function_coverage=1
00:37:35.721 --rc genhtml_legend=1
00:37:35.721 --rc geninfo_all_blocks=1
00:37:35.721 --rc geninfo_unexecuted_blocks=1
00:37:35.721
00:37:35.721 '
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:37:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.721 --rc genhtml_branch_coverage=1
00:37:35.721 --rc genhtml_function_coverage=1
00:37:35.721 --rc genhtml_legend=1
00:37:35.721 --rc geninfo_all_blocks=1
00:37:35.721 --rc geninfo_unexecuted_blocks=1
00:37:35.721
00:37:35.721 '
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:37:35.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:37:35.721 --rc genhtml_branch_coverage=1
00:37:35.721 --rc genhtml_function_coverage=1
00:37:35.721 --rc genhtml_legend=1
00:37:35.721 --rc geninfo_all_blocks=1
00:37:35.721 --rc geninfo_unexecuted_blocks=1
00:37:35.721
00:37:35.721 '
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:37:35.721 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode)
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable
00:37:35.722 16:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=()
00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@322 -- # local -ga mlx 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:37.625 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:0a:00.1 (0x8086 - 0x159b)' 00:37:37.625 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.625 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.626 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:37.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:37.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:37.626 16:43:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:37:37.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:37:37.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:37:37.626 
00:37:37.626 --- 10.0.0.2 ping statistics ---
00:37:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:37.626 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:37:37.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:37:37.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms
00:37:37.626 
00:37:37.626 --- 10.0.0.1 ping statistics ---
00:37:37.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:37:37.626 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=416184 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 416184 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 416184 ']' 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:37.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:37:37.626 16:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:37:37.885 [2024-11-19 16:43:27.988216] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:37:37.885 [2024-11-19 16:43:27.989314] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:37:37.885 [2024-11-19 16:43:27.989375] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:37:37.885 [2024-11-19 16:43:28.063366] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:37:37.885 [2024-11-19 16:43:28.111125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:37:37.885 [2024-11-19 16:43:28.111199] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:37:37.885 [2024-11-19 16:43:28.111227] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:37:37.885 [2024-11-19 16:43:28.111239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:37:37.885 [2024-11-19 16:43:28.111250] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:37:37.885 [2024-11-19 16:43:28.112824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:37:37.885 [2024-11-19 16:43:28.112829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:37.885 [2024-11-19 16:43:28.205445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:37:37.885 [2024-11-19 16:43:28.205463] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:37.885 [2024-11-19 16:43:28.205715] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 [2024-11-19 16:43:28.257496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 [2024-11-19 16:43:28.273696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 NULL1 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 Delay0 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=416212 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:38.144 16:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:38.144 [2024-11-19 16:43:28.354552] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:40.042 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:40.042 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.042 16:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 starting I/O failed: -6 00:37:40.300 Write completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 00:37:40.300 Read completed with error (sct=0, sc=8) 
00:37:40.300 starting I/O failed: -6
00:37:40.300 Write completed with error (sct=0, sc=8)
00:37:40.300 Read completed with error (sct=0, sc=8)
00:37:40.300 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:37:40.300 [2024-11-19 16:43:30.435275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e30000c40 is same with the state(6) to be set
00:37:40.300 [repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines omitted]
00:37:41.236 [2024-11-19 16:43:31.410826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf315b0 is same with the state(6) to be set
00:37:41.236 Read completed with error (sct=0, sc=8)
00:37:41.236 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:37:41.236 [2024-11-19 16:43:31.439116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf23b40 is same with the state(6) to be set
00:37:41.236 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:37:41.237 [2024-11-19 16:43:31.439345] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e3000d7e0 is same with the state(6) to be set
00:37:41.237 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:37:41.237 [2024-11-19 16:43:31.439544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f6e3000d020 is same with the state(6) to be set
00:37:41.237 [repeated "Read/Write completed with error (sct=0, sc=8)" lines omitted]
00:37:41.237 [2024-11-19 16:43:31.439805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf233f0 is same with the state(6) to be set
00:37:41.237 Initializing NVMe Controllers
00:37:41.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:41.237 Controller IO queue size 128, less than required.
00:37:41.237 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:41.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:41.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:41.237 Initialization complete. Launching workers.
00:37:41.237 ========================================================
00:37:41.237 Latency(us)
00:37:41.237 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:41.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     188.02       0.09  897332.83     627.20  1012947.04
00:37:41.237 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     164.70       0.08  906671.66     553.72  1013144.98
00:37:41.237 ========================================================
00:37:41.237 Total                                                                    :     352.72       0.17  901693.58     553.72  1013144.98
00:37:41.237
00:37:41.237 [2024-11-19 16:43:31.440826] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf315b0 (9): Bad file descriptor
00:37:41.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:41.237 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:41.237 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:41.237 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416212
00:37:41.237 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 416212
00:37:41.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (416212) - No such process
00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 416212
00:37:41.805 16:43:31
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 416212 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 416212 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:41.805 [2024-11-19 16:43:31.961710] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:41.805 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=416614 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:41.806 16:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:41.806 [2024-11-19 16:43:32.025483] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:37:42.371 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:42.371 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:42.371 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:43.015 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:43.015 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:43.015 16:43:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:43.322 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:43.322 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:43.322 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:43.933 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:43.933 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:43.933 16:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:44.190 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:44.190 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:44.190 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:44.755 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:44.755 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614 00:37:44.755 16:43:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:45.014 Initializing NVMe Controllers 00:37:45.014 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:45.014 Controller IO queue size 128, less than required. 00:37:45.014 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:45.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:45.014 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:45.014 Initialization complete. Launching workers. 
00:37:45.014 ========================================================
00:37:45.014 Latency(us)
00:37:45.014 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:37:45.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00       0.06 1005436.74 1000249.00 1043297.30
00:37:45.014 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00       0.06 1004936.61 1000185.92 1042146.26
00:37:45.014 ========================================================
00:37:45.014 Total                                                                    :     256.00       0.12 1005186.67 1000185.92 1043297.30
00:37:45.014
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 416614
00:37:45.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (416614) - No such process
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 416614
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:45.273 rmmod nvme_tcp 00:37:45.273 rmmod nvme_fabrics 00:37:45.273 rmmod nvme_keyring 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 416184 ']' 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 416184 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 416184 ']' 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 416184 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 416184 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 416184' 00:37:45.273 killing process with pid 416184 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 416184 00:37:45.273 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 416184 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:45.533 16:43:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:48.070 00:37:48.070 real 0m12.266s 00:37:48.070 user 0m24.699s 00:37:48.070 sys 0m3.507s 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 ************************************ 00:37:48.070 END TEST nvmf_delete_subsystem 00:37:48.070 ************************************ 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:48.070 ************************************ 00:37:48.070 START TEST nvmf_host_management 00:37:48.070 ************************************ 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:48.070 * Looking for test storage... 
00:37:48.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:48.070 16:43:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:48.070 16:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.070 --rc genhtml_branch_coverage=1 00:37:48.070 --rc genhtml_function_coverage=1 00:37:48.070 --rc genhtml_legend=1 00:37:48.070 --rc geninfo_all_blocks=1 00:37:48.070 --rc geninfo_unexecuted_blocks=1 00:37:48.070 00:37:48.070 ' 00:37:48.070 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:48.070 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.070 --rc genhtml_branch_coverage=1 00:37:48.070 --rc genhtml_function_coverage=1 00:37:48.070 --rc genhtml_legend=1 00:37:48.071 --rc geninfo_all_blocks=1 00:37:48.071 --rc geninfo_unexecuted_blocks=1 00:37:48.071 00:37:48.071 ' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:48.071 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.071 --rc genhtml_branch_coverage=1 00:37:48.071 --rc genhtml_function_coverage=1 00:37:48.071 --rc genhtml_legend=1 00:37:48.071 --rc geninfo_all_blocks=1 00:37:48.071 --rc geninfo_unexecuted_blocks=1 00:37:48.071 00:37:48.071 ' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:48.071 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:48.071 --rc genhtml_branch_coverage=1 00:37:48.071 --rc genhtml_function_coverage=1 00:37:48.071 --rc genhtml_legend=1 00:37:48.071 --rc geninfo_all_blocks=1 00:37:48.071 --rc geninfo_unexecuted_blocks=1 00:37:48.071 00:37:48.071 ' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:48.071 16:43:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.071 
16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:48.071 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:48.072 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:48.072 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:48.072 16:43:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:37:49.976 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:37:49.977 
16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:49.977 16:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:49.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.977 16:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:49.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.977 16:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:49.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:49.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:49.977 16:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:49.977 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:50.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:50.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:37:50.236 00:37:50.236 --- 10.0.0.2 ping statistics --- 00:37:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.236 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:50.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:50.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:37:50.236 00:37:50.236 --- 10.0.0.1 ping statistics --- 00:37:50.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:50.236 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:50.236 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=419070 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 419070 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 419070 ']' 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:50.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.237 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.237 [2024-11-19 16:43:40.406007] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:50.237 [2024-11-19 16:43:40.407107] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:37:50.237 [2024-11-19 16:43:40.407165] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:50.237 [2024-11-19 16:43:40.485605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:50.237 [2024-11-19 16:43:40.533863] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:50.237 [2024-11-19 16:43:40.533936] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:50.237 [2024-11-19 16:43:40.533962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:50.237 [2024-11-19 16:43:40.533973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:50.237 [2024-11-19 16:43:40.533982] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:50.237 [2024-11-19 16:43:40.535558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:50.237 [2024-11-19 16:43:40.535615] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:50.237 [2024-11-19 16:43:40.535721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:37:50.237 [2024-11-19 16:43:40.535731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:50.495 [2024-11-19 16:43:40.621334] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:50.495 [2024-11-19 16:43:40.621584] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:50.495 [2024-11-19 16:43:40.622481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:50.495 [2024-11-19 16:43:40.622555] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:50.495 [2024-11-19 16:43:40.622819] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:37:50.495 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 [2024-11-19 16:43:40.676485] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 16:43:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 Malloc0 00:37:50.496 [2024-11-19 16:43:40.756711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=419120 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 419120 /var/tmp/bdevperf.sock 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 419120 ']' 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
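The `rm -rf .../rpcs.txt` followed by `cat` and a bare `rpc_cmd` above is the batch-RPC pattern: the test writes one RPC per line into rpcs.txt and replays the whole file through a single rpc.py invocation. A plausible reconstruction of such a batch (this is a sketch, not the actual host_management.sh contents; sizes and NQNs are illustrative):

```shell
#!/usr/bin/env bash
# Hypothetical rpcs.txt batch in the style the trace replays via rpc_cmd:
# create a Malloc bdev, a subsystem, attach the namespace, add a listener.
RPC_FILE=/tmp/rpcs.txt
cat > "$RPC_FILE" <<'EOF'
bdev_malloc_create 64 512 -b Malloc0
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
EOF
# Feeding the whole file to rpc.py once avoids per-call startup cost:
#   ./scripts/rpc.py < "$RPC_FILE"
```

The quoted `'EOF'` delimiter keeps the file verbatim (no shell expansion), matching how a fixed RPC script would be checked in.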
rpc_addr=/var/tmp/bdevperf.sock 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:37:50.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:37:50.496 { 00:37:50.496 "params": { 00:37:50.496 "name": "Nvme$subsystem", 00:37:50.496 "trtype": "$TEST_TRANSPORT", 00:37:50.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:50.496 "adrfam": "ipv4", 00:37:50.496 "trsvcid": "$NVMF_PORT", 00:37:50.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:50.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:50.496 "hdgst": ${hdgst:-false}, 00:37:50.496 "ddgst": ${ddgst:-false} 00:37:50.496 }, 00:37:50.496 "method": "bdev_nvme_attach_controller" 00:37:50.496 } 00:37:50.496 EOF 00:37:50.496 )") 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:37:50.496 16:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:37:50.496 "params": { 00:37:50.496 "name": "Nvme0", 00:37:50.496 "trtype": "tcp", 00:37:50.496 "traddr": "10.0.0.2", 00:37:50.496 "adrfam": "ipv4", 00:37:50.496 "trsvcid": "4420", 00:37:50.496 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:50.496 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:50.496 "hdgst": false, 00:37:50.496 "ddgst": false 00:37:50.496 }, 00:37:50.496 "method": "bdev_nvme_attach_controller" 00:37:50.496 }' 00:37:50.754 [2024-11-19 16:43:40.841531] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:37:50.754 [2024-11-19 16:43:40.841606] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419120 ] 00:37:50.754 [2024-11-19 16:43:40.911859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:50.754 [2024-11-19 16:43:40.958607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:51.013 Running I/O for 10 seconds... 
00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:37:51.013 16:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:37:51.013 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
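The `host_management.sh@52-64` lines above implement `waitforio`: a bounded retry loop that polls the bdev's read-op counter over the RPC socket and succeeds once it crosses a threshold. A self-contained sketch, with the RPC+jq pipeline replaced by a mock so it runs anywhere (function names other than the loop shape are assumptions):

```shell
#!/usr/bin/env bash
# Standalone sketch of the waitforio loop from the trace. read_ops() is a
# mock standing in for:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
#     | jq -r '.bdevs[0].num_read_ops'
read_ops() { echo 300; }

waitforio() {
    local ret=1 i count
    # Up to 10 polls, 0.25 s apart, mirroring the (( i = 10 )) / sleep 0.25
    # structure in the log.
    for ((i = 10; i != 0; i--)); do
        count=$(read_ops)
        if [ "$count" -ge 100 ]; then   # same threshold as the trace
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}
```

With the mock returning 300 the loop exits on the first poll; in the log the first poll reads 67 (below the threshold), so the harness sleeps and retries, succeeding on the second poll at 579.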
00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=579 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 579 -ge 100 ']' 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.273 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x [2024-11-19 16:43:41.520501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13406c0 is same with the state(6) to be set 00:37:51.273 [2024-11-19 16:43:41.520556 .. 16:43:41.521096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: (identical message repeated 44 more times for tqpair=0x13406c0) 00:37:51.274 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:51.274 [2024-11-19 16:43:41.524834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.274 [2024-11-19 16:43:41.524875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.274 [2024-11-19 16:43:41.524904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.274 [2024-11-19 16:43:41.524921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.274 [2024-11-19 16:43:41.524937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.274 [2024-11-19 16:43:41.524951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:37:51.274 [2024-11-19 16:43:41.524967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.274 [2024-11-19 16:43:41.524981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.274 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:37:51.274 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:51.274 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:51.274 [2024-11-19 16:43:41.524997 .. 16:43:41.525922] nvme_qpair.c: (WRITE command / ABORTED - SQ DELETION completion pairs repeated for cid:4 .. cid:34, lba:82432 .. lba:86272 in steps of 128) 00:37:51.274 [2024-11-19 16:43:41.525937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.525950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.525970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.525984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.525999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 
[2024-11-19 16:43:41.526133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526292] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 16:43:41.526778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:37:51.275 [2024-11-19 16:43:41.526794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:89984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:37:51.275 [2024-11-19 
16:43:41.526807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:37:51.275 [2024-11-19 16:43:41.526844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:37:51.275 [2024-11-19 16:43:41.528010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:37:51.275 task offset: 81920 on job bdev=Nvme0n1 fails
00:37:51.275
00:37:51.275 Latency(us)
00:37:51.275 [2024-11-19T15:43:41.614Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:51.275 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:51.275 Job: Nvme0n1 ended in about 0.40 seconds with error
00:37:51.275 Verification LBA range: start 0x0 length 0x400
00:37:51.275 Nvme0n1 : 0.40 1612.72 100.79 161.27 0.00 35024.87 2706.39 34369.99
00:37:51.275 [2024-11-19T15:43:41.614Z] ===================================================================================================================
00:37:51.275 [2024-11-19T15:43:41.614Z] Total : 1612.72 100.79 161.27 0.00 35024.87 2706.39 34369.99
00:37:51.275 [2024-11-19 16:43:41.529929] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:37:51.275 [2024-11-19 16:43:41.529970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbcd70 (9): Bad file descriptor
00:37:51.275 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:51.275 16:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:37:51.275 [2024-11-19 16:43:41.533593] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
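An aside for readers scanning the bdevperf summary above: the MiB/s column is derived directly from the IOPS column and the fixed IO size passed to bdevperf (`-o 65536`). A minimal sketch using the figures reported in this log (the computation is ours, not SPDK output):

```shell
# MiB/s = IOPS * io_size_bytes / (1024 * 1024), with -o 65536 and the
# IOPS value taken from the summary table above. Prints a value close to
# the reported 100.79 MiB/s.
awk 'BEGIN { printf "%.2f\n", 1612.72 * 65536 / (1024 * 1024) }'
```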
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 419120
00:37:52.209 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (419120) - No such process
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:37:52.209 {
00:37:52.209 "params": {
00:37:52.209 "name": "Nvme$subsystem",
00:37:52.209 "trtype": "$TEST_TRANSPORT",
00:37:52.209 "traddr": "$NVMF_FIRST_TARGET_IP",
00:37:52.209 "adrfam": "ipv4",
00:37:52.209 "trsvcid": "$NVMF_PORT",
00:37:52.209 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:37:52.209 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:37:52.209 "hdgst": ${hdgst:-false},
00:37:52.209 "ddgst": ${ddgst:-false}
00:37:52.209 },
00:37:52.209 "method": "bdev_nvme_attach_controller"
00:37:52.209 }
00:37:52.209 EOF
00:37:52.209 )")
00:37:52.209 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:37:52.210 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:37:52.210 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:37:52.468 16:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:37:52.468 "params": {
00:37:52.468 "name": "Nvme0",
00:37:52.468 "trtype": "tcp",
00:37:52.468 "traddr": "10.0.0.2",
00:37:52.468 "adrfam": "ipv4",
00:37:52.468 "trsvcid": "4420",
00:37:52.468 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:37:52.468 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:37:52.468 "hdgst": false,
00:37:52.468 "ddgst": false
00:37:52.468 },
00:37:52.468 "method": "bdev_nvme_attach_controller"
00:37:52.468 }'
00:37:52.468 [2024-11-19 16:43:42.582809] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:37:52.468 [2024-11-19 16:43:42.582878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid419394 ]
00:37:52.468 [2024-11-19 16:43:42.652954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:37:52.468 [2024-11-19 16:43:42.701427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:37:52.726 Running I/O for 1 seconds...
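A note on the trace above: the JSON produced by gen_nvmf_target_json is handed to bdevperf as `--json /dev/fd/62`, so the config never touches disk. A minimal sketch of that process-substitution pattern (here `cat` stands in for bdevperf, and `gen_config` is a hypothetical stand-in for gen_nvmf_target_json; this is our illustration, not the test script itself):

```shell
#!/usr/bin/env bash
# Generate a config on the fly and expose it to the consumer via a /dev/fd
# path. gen_config is a hypothetical stand-in for gen_nvmf_target_json.
gen_config() {
  printf '{ "params": { "name": "Nvme0", "trtype": "tcp" } }\n'
}
# Process substitution <(...) gives the generated JSON a readable fd path,
# analogous to bdevperf --json /dev/fd/62 in the log above.
cat <(gen_config)
```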
00:37:53.935 1536.00 IOPS, 96.00 MiB/s
00:37:53.935 Latency(us)
00:37:53.935 [2024-11-19T15:43:44.274Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:53.935 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:37:53.935 Verification LBA range: start 0x0 length 0x400
00:37:53.935 Nvme0n1 : 1.01 1579.03 98.69 0.00 0.00 39890.04 6068.15 36894.34
00:37:53.935 [2024-11-19T15:43:44.274Z] ===================================================================================================================
00:37:53.935 [2024-11-19T15:43:44.274Z] Total : 1579.03 98.69 0.00 0.00 39890.04 6068.15 36894.34
00:37:53.935 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:37:53.936 16:43:44
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.936 rmmod nvme_tcp 00:37:53.936 rmmod nvme_fabrics 00:37:53.936 rmmod nvme_keyring 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 419070 ']' 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 419070 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 419070 ']' 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 419070 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.936 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419070 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:54.194 16:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419070' 00:37:54.194 killing process with pid 419070 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 419070 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 419070 00:37:54.194 [2024-11-19 16:43:44.470339] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:54.194 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:54.195 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:54.195 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:54.195 16:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:37:56.739 00:37:56.739 real 0m8.684s 00:37:56.739 user 0m16.680s 00:37:56.739 sys 0m3.770s 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:37:56.739 ************************************ 00:37:56.739 END TEST nvmf_host_management 00:37:56.739 ************************************ 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.739 ************************************ 00:37:56.739 START TEST nvmf_lvol 00:37:56.739 ************************************ 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:37:56.739 * Looking for test storage... 
00:37:56.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.739 --rc genhtml_branch_coverage=1 00:37:56.739 --rc genhtml_function_coverage=1 00:37:56.739 --rc genhtml_legend=1 00:37:56.739 --rc geninfo_all_blocks=1 00:37:56.739 --rc geninfo_unexecuted_blocks=1 00:37:56.739 00:37:56.739 ' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.739 --rc genhtml_branch_coverage=1 00:37:56.739 --rc genhtml_function_coverage=1 00:37:56.739 --rc genhtml_legend=1 00:37:56.739 --rc geninfo_all_blocks=1 00:37:56.739 --rc geninfo_unexecuted_blocks=1 00:37:56.739 00:37:56.739 ' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.739 --rc genhtml_branch_coverage=1 00:37:56.739 --rc genhtml_function_coverage=1 00:37:56.739 --rc genhtml_legend=1 00:37:56.739 --rc geninfo_all_blocks=1 00:37:56.739 --rc geninfo_unexecuted_blocks=1 00:37:56.739 00:37:56.739 ' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:56.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.739 --rc genhtml_branch_coverage=1 00:37:56.739 --rc genhtml_function_coverage=1 00:37:56.739 --rc genhtml_legend=1 00:37:56.739 --rc geninfo_all_blocks=1 00:37:56.739 --rc geninfo_unexecuted_blocks=1 00:37:56.739 00:37:56.739 ' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.739 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.740 
16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.740 16:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:37:58.644 16:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:58.644 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:58.645 16:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:37:58.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:37:58.645 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:58.645 16:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:37:58.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.645 16:43:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:37:58.645 Found net devices under 0000:0a:00.1: cvl_0_1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- 
# ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:58.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:58.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:37:58.645 00:37:58.645 --- 10.0.0.2 ping statistics --- 00:37:58.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.645 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:37:58.645 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:58.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:58.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:37:58.645 00:37:58.645 --- 10.0.0.1 ping statistics --- 00:37:58.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:58.646 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=421470 
00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 421470 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 421470 ']' 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:58.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:58.646 16:43:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:58.905 [2024-11-19 16:43:49.020045] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:58.905 [2024-11-19 16:43:49.021065] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:37:58.905 [2024-11-19 16:43:49.021137] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:58.905 [2024-11-19 16:43:49.089389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:58.905 [2024-11-19 16:43:49.132238] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:58.905 [2024-11-19 16:43:49.132296] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:58.905 [2024-11-19 16:43:49.132325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:58.905 [2024-11-19 16:43:49.132336] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:58.905 [2024-11-19 16:43:49.132345] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:58.905 [2024-11-19 16:43:49.133811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:58.905 [2024-11-19 16:43:49.133885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:58.905 [2024-11-19 16:43:49.133890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:58.905 [2024-11-19 16:43:49.221325] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:58.905 [2024-11-19 16:43:49.221534] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:58.905 [2024-11-19 16:43:49.221552] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:58.905 [2024-11-19 16:43:49.221805] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:59.163 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:59.420 [2024-11-19 16:43:49.526605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:59.420 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.678 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:37:59.678 16:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:37:59.936 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:37:59.936 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:00.194 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:00.451 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=eac1e04a-bc42-41be-bddb-0b57bcb3fd7c 00:38:00.451 16:43:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u eac1e04a-bc42-41be-bddb-0b57bcb3fd7c lvol 20 00:38:00.710 16:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=033df1ce-0521-44a5-ad63-43c1ef7dc4df 00:38:00.710 16:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:00.968 16:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 033df1ce-0521-44a5-ad63-43c1ef7dc4df 00:38:01.226 16:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:01.484 [2024-11-19 16:43:51.798777] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:01.484 16:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:02.051 
16:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=421890 00:38:02.052 16:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:02.052 16:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:02.986 16:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 033df1ce-0521-44a5-ad63-43c1ef7dc4df MY_SNAPSHOT 00:38:03.244 16:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=30000412-74bf-4023-86c6-4911a4f7ed74 00:38:03.244 16:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 033df1ce-0521-44a5-ad63-43c1ef7dc4df 30 00:38:03.503 16:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 30000412-74bf-4023-86c6-4911a4f7ed74 MY_CLONE 00:38:03.761 16:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=f3f9d82d-144e-4380-bcc6-44c7d1420610 00:38:03.761 16:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate f3f9d82d-144e-4380-bcc6-44c7d1420610 00:38:04.327 16:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 421890 00:38:12.438 Initializing NVMe Controllers 00:38:12.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:12.438 
Controller IO queue size 128, less than required. 00:38:12.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:12.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:12.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:12.438 Initialization complete. Launching workers. 00:38:12.438 ======================================================== 00:38:12.438 Latency(us) 00:38:12.438 Device Information : IOPS MiB/s Average min max 00:38:12.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10482.30 40.95 12215.20 4279.33 58983.14 00:38:12.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10503.70 41.03 12191.59 4957.41 80353.27 00:38:12.438 ======================================================== 00:38:12.438 Total : 20986.00 81.98 12203.38 4279.33 80353.27 00:38:12.438 00:38:12.438 16:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:12.438 16:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 033df1ce-0521-44a5-ad63-43c1ef7dc4df 00:38:12.697 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u eac1e04a-bc42-41be-bddb-0b57bcb3fd7c 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:13.263 rmmod nvme_tcp 00:38:13.263 rmmod nvme_fabrics 00:38:13.263 rmmod nvme_keyring 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 421470 ']' 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 421470 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 421470 ']' 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 421470 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 421470 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 421470' 00:38:13.263 killing process with pid 421470 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 421470 00:38:13.263 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 421470 00:38:13.523 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:13.524 16:44:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:13.524 16:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.430 00:38:15.430 real 0m19.072s 00:38:15.430 user 0m55.589s 00:38:15.430 sys 0m8.045s 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:15.430 ************************************ 00:38:15.430 END TEST nvmf_lvol 00:38:15.430 ************************************ 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:15.430 ************************************ 00:38:15.430 START TEST nvmf_lvs_grow 00:38:15.430 ************************************ 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:15.430 * Looking for test storage... 
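The `nvmf_lvol` test that just completed above exercises a fixed RPC sequence: build a RAID-0 bdev, carve an lvstore and lvol out of it, export the lvol over NVMe/TCP, run `spdk_nvme_perf` against it, and meanwhile snapshot, resize, clone, and inflate the volume. The following is a dry-run sketch of that flow reconstructed from the log. The `rpc` wrapper is hypothetical (it only echoes); on a real target each call would be `scripts/rpc.py` against a running `nvmf_tgt`, and the lvstore/lvol/snapshot/clone UUIDs would come back from the RPCs rather than being the echoed command text.

```shell
#!/bin/bash
# Hypothetical dry-run wrapper: prints the RPC instead of invoking
# scripts/rpc.py against a live SPDK target.
rpc() { echo "rpc.py $*"; }

rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'

# On a real target these captures hold the UUIDs returned by the RPCs;
# here they hold the echoed command line.
lvs=$(rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$(rpc bdev_lvol_create -u "$lvs" lvol 20)        # 20 MiB logical volume

# Export the lvol over NVMe/TCP (addresses taken from the log above).
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# While perf I/O runs in the background, mutate the volume:
snap=$(rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc bdev_lvol_resize "$lvol" 30                       # grow the live lvol to 30 MiB
clone=$(rpc bdev_lvol_clone "$snap" MY_CLONE)
rpc bdev_lvol_inflate "$clone"                        # detach the clone from its snapshot

# Teardown mirrors the log: subsystem, lvol, then lvstore.
rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
rpc bdev_lvol_delete "$lvol"
rpc bdev_lvol_delete_lvstore -u "$lvs"
```

The resize happening while `spdk_nvme_perf` is writing is the point of the test: the namespace must stay serviceable across snapshot, online grow, and clone/inflate.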
00:38:15.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:38:15.430 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.690 16:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.690 16:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.690 --rc genhtml_branch_coverage=1 00:38:15.690 --rc genhtml_function_coverage=1 00:38:15.690 --rc genhtml_legend=1 00:38:15.690 --rc geninfo_all_blocks=1 00:38:15.690 --rc geninfo_unexecuted_blocks=1 00:38:15.690 00:38:15.690 ' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.690 --rc genhtml_branch_coverage=1 00:38:15.690 --rc genhtml_function_coverage=1 00:38:15.690 --rc genhtml_legend=1 00:38:15.690 --rc geninfo_all_blocks=1 00:38:15.690 --rc geninfo_unexecuted_blocks=1 00:38:15.690 00:38:15.690 ' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.690 --rc genhtml_branch_coverage=1 00:38:15.690 --rc genhtml_function_coverage=1 00:38:15.690 --rc genhtml_legend=1 00:38:15.690 --rc geninfo_all_blocks=1 00:38:15.690 --rc geninfo_unexecuted_blocks=1 00:38:15.690 00:38:15.690 ' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:15.690 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.690 --rc genhtml_branch_coverage=1 00:38:15.690 --rc genhtml_function_coverage=1 00:38:15.690 --rc genhtml_legend=1 00:38:15.690 --rc geninfo_all_blocks=1 00:38:15.690 --rc 
geninfo_unexecuted_blocks=1 00:38:15.690 00:38:15.690 ' 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.690 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:15.691 16:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.691 16:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.691 16:44:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.691 16:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.238 
16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:18.238 16:44:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:18.238 16:44:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:38:18.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:38:18.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:38:18.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.238 16:44:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:18.238 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:38:18.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:18.239 
16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:18.239 16:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:18.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:18.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:38:18.239 00:38:18.239 --- 10.0.0.2 ping statistics --- 00:38:18.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.239 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:18.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:18.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:38:18.239 00:38:18.239 --- 10.0.0.1 ping statistics --- 00:38:18.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:18.239 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.239 16:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=425143 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 425143 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 425143 ']' 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:18.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.239 [2024-11-19 16:44:08.273946] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:18.239 [2024-11-19 16:44:08.274975] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:38:18.239 [2024-11-19 16:44:08.275029] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:18.239 [2024-11-19 16:44:08.345238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:18.239 [2024-11-19 16:44:08.389812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:18.239 [2024-11-19 16:44:08.389880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:18.239 [2024-11-19 16:44:08.389902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:18.239 [2024-11-19 16:44:08.389913] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:18.239 [2024-11-19 16:44:08.389923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:18.239 [2024-11-19 16:44:08.390540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:18.239 [2024-11-19 16:44:08.474458] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:18.239 [2024-11-19 16:44:08.474776] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:18.239 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:18.498 [2024-11-19 16:44:08.775117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:18.498 ************************************ 00:38:18.498 START TEST lvs_grow_clean 00:38:18.498 ************************************ 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:18.498 16:44:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.498 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:18.757 16:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:19.015 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:19.016 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:19.275 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:19.275 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:19.275 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:19.534 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:19.534 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:19.534 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 62d18d82-6f76-408e-b43b-770c93cd5d77 lvol 150 00:38:19.793 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a51f1a47-b642-4201-9748-9beb18edbe65 00:38:19.793 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:19.793 16:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:20.052 [2024-11-19 16:44:10.214994] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:20.052 [2024-11-19 16:44:10.215127] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:20.052 true 00:38:20.052 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:20.052 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:20.310 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:20.310 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:20.568 16:44:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a51f1a47-b642-4201-9748-9beb18edbe65 00:38:20.825 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:21.083 [2024-11-19 16:44:11.331353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:21.083 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=425573 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 425573 /var/tmp/bdevperf.sock 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 425573 ']' 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:21.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:21.341 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:21.341 [2024-11-19 16:44:11.666934] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:38:21.341 [2024-11-19 16:44:11.667016] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425573 ] 00:38:21.598 [2024-11-19 16:44:11.734457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:21.598 [2024-11-19 16:44:11.781959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:21.598 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:21.598 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:21.598 16:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:22.165 Nvme0n1 00:38:22.165 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:22.423 [ 00:38:22.423 { 00:38:22.423 "name": "Nvme0n1", 00:38:22.423 "aliases": [ 00:38:22.423 "a51f1a47-b642-4201-9748-9beb18edbe65" 00:38:22.423 ], 00:38:22.423 "product_name": "NVMe disk", 00:38:22.423 
"block_size": 4096, 00:38:22.423 "num_blocks": 38912, 00:38:22.423 "uuid": "a51f1a47-b642-4201-9748-9beb18edbe65", 00:38:22.423 "numa_id": 0, 00:38:22.423 "assigned_rate_limits": { 00:38:22.423 "rw_ios_per_sec": 0, 00:38:22.423 "rw_mbytes_per_sec": 0, 00:38:22.423 "r_mbytes_per_sec": 0, 00:38:22.423 "w_mbytes_per_sec": 0 00:38:22.423 }, 00:38:22.423 "claimed": false, 00:38:22.423 "zoned": false, 00:38:22.424 "supported_io_types": { 00:38:22.424 "read": true, 00:38:22.424 "write": true, 00:38:22.424 "unmap": true, 00:38:22.424 "flush": true, 00:38:22.424 "reset": true, 00:38:22.424 "nvme_admin": true, 00:38:22.424 "nvme_io": true, 00:38:22.424 "nvme_io_md": false, 00:38:22.424 "write_zeroes": true, 00:38:22.424 "zcopy": false, 00:38:22.424 "get_zone_info": false, 00:38:22.424 "zone_management": false, 00:38:22.424 "zone_append": false, 00:38:22.424 "compare": true, 00:38:22.424 "compare_and_write": true, 00:38:22.424 "abort": true, 00:38:22.424 "seek_hole": false, 00:38:22.424 "seek_data": false, 00:38:22.424 "copy": true, 00:38:22.424 "nvme_iov_md": false 00:38:22.424 }, 00:38:22.424 "memory_domains": [ 00:38:22.424 { 00:38:22.424 "dma_device_id": "system", 00:38:22.424 "dma_device_type": 1 00:38:22.424 } 00:38:22.424 ], 00:38:22.424 "driver_specific": { 00:38:22.424 "nvme": [ 00:38:22.424 { 00:38:22.424 "trid": { 00:38:22.424 "trtype": "TCP", 00:38:22.424 "adrfam": "IPv4", 00:38:22.424 "traddr": "10.0.0.2", 00:38:22.424 "trsvcid": "4420", 00:38:22.424 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:22.424 }, 00:38:22.424 "ctrlr_data": { 00:38:22.424 "cntlid": 1, 00:38:22.424 "vendor_id": "0x8086", 00:38:22.424 "model_number": "SPDK bdev Controller", 00:38:22.424 "serial_number": "SPDK0", 00:38:22.424 "firmware_revision": "25.01", 00:38:22.424 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:22.424 "oacs": { 00:38:22.424 "security": 0, 00:38:22.424 "format": 0, 00:38:22.424 "firmware": 0, 00:38:22.424 "ns_manage": 0 00:38:22.424 }, 00:38:22.424 "multi_ctrlr": true, 
00:38:22.424 "ana_reporting": false 00:38:22.424 }, 00:38:22.424 "vs": { 00:38:22.424 "nvme_version": "1.3" 00:38:22.424 }, 00:38:22.424 "ns_data": { 00:38:22.424 "id": 1, 00:38:22.424 "can_share": true 00:38:22.424 } 00:38:22.424 } 00:38:22.424 ], 00:38:22.424 "mp_policy": "active_passive" 00:38:22.424 } 00:38:22.424 } 00:38:22.424 ] 00:38:22.424 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=425706 00:38:22.424 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:22.424 16:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:22.682 Running I/O for 10 seconds... 00:38:23.616 Latency(us) 00:38:23.616 [2024-11-19T15:44:13.955Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.616 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:23.616 Nvme0n1 : 1.00 15020.00 58.67 0.00 0.00 0.00 0.00 0.00 00:38:23.616 [2024-11-19T15:44:13.955Z] =================================================================================================================== 00:38:23.616 [2024-11-19T15:44:13.955Z] Total : 15020.00 58.67 0.00 0.00 0.00 0.00 0.00 00:38:23.616 00:38:24.551 16:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:24.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:24.551 Nvme0n1 : 2.00 15193.50 59.35 0.00 0.00 0.00 0.00 0.00 00:38:24.551 [2024-11-19T15:44:14.890Z] 
=================================================================================================================== 00:38:24.551 [2024-11-19T15:44:14.890Z] Total : 15193.50 59.35 0.00 0.00 0.00 0.00 0.00 00:38:24.551 00:38:24.810 true 00:38:24.810 16:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:24.810 16:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:25.069 16:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:25.069 16:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:25.069 16:44:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 425706 00:38:25.634 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:25.635 Nvme0n1 : 3.00 15251.33 59.58 0.00 0.00 0.00 0.00 0.00 00:38:25.635 [2024-11-19T15:44:15.974Z] =================================================================================================================== 00:38:25.635 [2024-11-19T15:44:15.974Z] Total : 15251.33 59.58 0.00 0.00 0.00 0.00 0.00 00:38:25.635 00:38:26.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:26.569 Nvme0n1 : 4.00 15343.75 59.94 0.00 0.00 0.00 0.00 0.00 00:38:26.569 [2024-11-19T15:44:16.908Z] =================================================================================================================== 00:38:26.569 [2024-11-19T15:44:16.908Z] Total : 15343.75 59.94 0.00 0.00 0.00 0.00 0.00 00:38:26.569 00:38:27.504 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:38:27.504 Nvme0n1 : 5.00 15406.00 60.18 0.00 0.00 0.00 0.00 0.00 00:38:27.504 [2024-11-19T15:44:17.843Z] =================================================================================================================== 00:38:27.504 [2024-11-19T15:44:17.843Z] Total : 15406.00 60.18 0.00 0.00 0.00 0.00 0.00 00:38:27.504 00:38:28.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:28.878 Nvme0n1 : 6.00 15405.17 60.18 0.00 0.00 0.00 0.00 0.00 00:38:28.878 [2024-11-19T15:44:19.217Z] =================================================================================================================== 00:38:28.878 [2024-11-19T15:44:19.217Z] Total : 15405.17 60.18 0.00 0.00 0.00 0.00 0.00 00:38:28.878 00:38:29.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:29.811 Nvme0n1 : 7.00 15436.00 60.30 0.00 0.00 0.00 0.00 0.00 00:38:29.811 [2024-11-19T15:44:20.151Z] =================================================================================================================== 00:38:29.812 [2024-11-19T15:44:20.151Z] Total : 15436.00 60.30 0.00 0.00 0.00 0.00 0.00 00:38:29.812 00:38:30.747 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:30.747 Nvme0n1 : 8.00 15443.25 60.33 0.00 0.00 0.00 0.00 0.00 00:38:30.747 [2024-11-19T15:44:21.086Z] =================================================================================================================== 00:38:30.747 [2024-11-19T15:44:21.086Z] Total : 15443.25 60.33 0.00 0.00 0.00 0.00 0.00 00:38:30.747 00:38:31.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:31.682 Nvme0n1 : 9.00 15480.89 60.47 0.00 0.00 0.00 0.00 0.00 00:38:31.682 [2024-11-19T15:44:22.021Z] =================================================================================================================== 00:38:31.682 [2024-11-19T15:44:22.021Z] Total : 15480.89 60.47 0.00 0.00 0.00 0.00 0.00 00:38:31.682 
00:38:32.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.617 Nvme0n1 : 10.00 15520.30 60.63 0.00 0.00 0.00 0.00 0.00 00:38:32.617 [2024-11-19T15:44:22.956Z] =================================================================================================================== 00:38:32.617 [2024-11-19T15:44:22.956Z] Total : 15520.30 60.63 0.00 0.00 0.00 0.00 0.00 00:38:32.617 00:38:32.617 00:38:32.617 Latency(us) 00:38:32.617 [2024-11-19T15:44:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:32.617 Nvme0n1 : 10.01 15519.58 60.62 0.00 0.00 8243.07 4320.52 18350.08 00:38:32.617 [2024-11-19T15:44:22.956Z] =================================================================================================================== 00:38:32.617 [2024-11-19T15:44:22.956Z] Total : 15519.58 60.62 0.00 0.00 8243.07 4320.52 18350.08 00:38:32.617 { 00:38:32.617 "results": [ 00:38:32.617 { 00:38:32.617 "job": "Nvme0n1", 00:38:32.617 "core_mask": "0x2", 00:38:32.617 "workload": "randwrite", 00:38:32.617 "status": "finished", 00:38:32.617 "queue_depth": 128, 00:38:32.617 "io_size": 4096, 00:38:32.617 "runtime": 10.008714, 00:38:32.617 "iops": 15519.576241263363, 00:38:32.617 "mibps": 60.62334469243501, 00:38:32.617 "io_failed": 0, 00:38:32.617 "io_timeout": 0, 00:38:32.617 "avg_latency_us": 8243.070484272892, 00:38:32.617 "min_latency_us": 4320.521481481482, 00:38:32.617 "max_latency_us": 18350.08 00:38:32.617 } 00:38:32.617 ], 00:38:32.617 "core_count": 1 00:38:32.617 } 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 425573 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 425573 ']' 00:38:32.617 16:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 425573 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425573 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425573' 00:38:32.617 killing process with pid 425573 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 425573 00:38:32.617 Received shutdown signal, test time was about 10.000000 seconds 00:38:32.617 00:38:32.617 Latency(us) 00:38:32.617 [2024-11-19T15:44:22.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:32.617 [2024-11-19T15:44:22.956Z] =================================================================================================================== 00:38:32.617 [2024-11-19T15:44:22.956Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:32.617 16:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 425573 00:38:32.875 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:33.133 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:33.392 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:33.392 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:33.650 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:33.650 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:33.651 16:44:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:33.909 [2024-11-19 16:44:24.151060] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:33.909 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:34.168 request: 00:38:34.168 { 00:38:34.168 "uuid": "62d18d82-6f76-408e-b43b-770c93cd5d77", 00:38:34.168 "method": 
"bdev_lvol_get_lvstores", 00:38:34.168 "req_id": 1 00:38:34.168 } 00:38:34.168 Got JSON-RPC error response 00:38:34.168 response: 00:38:34.168 { 00:38:34.168 "code": -19, 00:38:34.168 "message": "No such device" 00:38:34.168 } 00:38:34.168 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:34.168 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:34.168 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:34.168 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:34.168 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:34.426 aio_bdev 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev a51f1a47-b642-4201-9748-9beb18edbe65 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a51f1a47-b642-4201-9748-9beb18edbe65 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:34.426 16:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:34.993 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a51f1a47-b642-4201-9748-9beb18edbe65 -t 2000 00:38:34.993 [ 00:38:34.993 { 00:38:34.993 "name": "a51f1a47-b642-4201-9748-9beb18edbe65", 00:38:34.993 "aliases": [ 00:38:34.993 "lvs/lvol" 00:38:34.993 ], 00:38:34.993 "product_name": "Logical Volume", 00:38:34.993 "block_size": 4096, 00:38:34.993 "num_blocks": 38912, 00:38:34.993 "uuid": "a51f1a47-b642-4201-9748-9beb18edbe65", 00:38:34.993 "assigned_rate_limits": { 00:38:34.993 "rw_ios_per_sec": 0, 00:38:34.993 "rw_mbytes_per_sec": 0, 00:38:34.993 "r_mbytes_per_sec": 0, 00:38:34.993 "w_mbytes_per_sec": 0 00:38:34.993 }, 00:38:34.993 "claimed": false, 00:38:34.993 "zoned": false, 00:38:34.993 "supported_io_types": { 00:38:34.993 "read": true, 00:38:34.993 "write": true, 00:38:34.993 "unmap": true, 00:38:34.993 "flush": false, 00:38:34.993 "reset": true, 00:38:34.993 "nvme_admin": false, 00:38:34.993 "nvme_io": false, 00:38:34.993 "nvme_io_md": false, 00:38:34.993 "write_zeroes": true, 00:38:34.993 "zcopy": false, 00:38:34.993 "get_zone_info": false, 00:38:34.993 "zone_management": false, 00:38:34.993 "zone_append": false, 00:38:34.993 "compare": false, 00:38:34.993 "compare_and_write": false, 00:38:34.993 "abort": false, 00:38:34.993 "seek_hole": true, 00:38:34.993 "seek_data": true, 00:38:34.993 "copy": false, 00:38:34.993 "nvme_iov_md": false 00:38:34.993 }, 00:38:34.993 "driver_specific": { 00:38:34.993 "lvol": { 00:38:34.993 "lvol_store_uuid": "62d18d82-6f76-408e-b43b-770c93cd5d77", 00:38:34.993 "base_bdev": "aio_bdev", 00:38:34.993 
"thin_provision": false, 00:38:34.994 "num_allocated_clusters": 38, 00:38:34.994 "snapshot": false, 00:38:34.994 "clone": false, 00:38:34.994 "esnap_clone": false 00:38:34.994 } 00:38:34.994 } 00:38:34.994 } 00:38:34.994 ] 00:38:34.994 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:34.994 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:34.994 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:35.561 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:35.561 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 62d18d82-6f76-408e-b43b-770c93cd5d77 00:38:35.561 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:35.561 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:35.561 16:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a51f1a47-b642-4201-9748-9beb18edbe65 00:38:35.820 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 62d18d82-6f76-408e-b43b-770c93cd5d77 
00:38:36.388 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:36.388 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.647 00:38:36.647 real 0m17.905s 00:38:36.647 user 0m17.241s 00:38:36.647 sys 0m1.967s 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:36.647 ************************************ 00:38:36.647 END TEST lvs_grow_clean 00:38:36.647 ************************************ 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.647 ************************************ 00:38:36.647 START TEST lvs_grow_dirty 00:38:36.647 ************************************ 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:36.647 16:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.647 16:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:36.906 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:36.906 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:37.164 16:44:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=a9325919-7432-45da-bdbe-5752cd80db6f 00:38:37.164 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:37.164 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:37.422 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:37.422 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:37.422 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9325919-7432-45da-bdbe-5752cd80db6f lvol 150 00:38:37.680 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=fa427f4d-618a-4789-9163-1d5b13c206af 00:38:37.680 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:37.680 16:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:37.981 [2024-11-19 16:44:28.239029] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:37.981 [2024-11-19 
16:44:28.239164] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:37.981 true 00:38:37.981 16:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:37.981 16:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:38.299 16:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:38.299 16:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:38.603 16:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 fa427f4d-618a-4789-9163-1d5b13c206af 00:38:38.883 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:39.141 [2024-11-19 16:44:29.347345] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:39.141 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=427739 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 427739 /var/tmp/bdevperf.sock 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 427739 ']' 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:39.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:39.399 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:39.399 [2024-11-19 16:44:29.678114] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:38:39.399 [2024-11-19 16:44:29.678196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427739 ] 00:38:39.657 [2024-11-19 16:44:29.745986] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:39.657 [2024-11-19 16:44:29.794326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:39.657 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:39.657 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:39.657 16:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:40.224 Nvme0n1 00:38:40.224 16:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:40.224 [ 00:38:40.224 { 00:38:40.224 "name": "Nvme0n1", 00:38:40.224 "aliases": [ 00:38:40.224 "fa427f4d-618a-4789-9163-1d5b13c206af" 00:38:40.224 ], 00:38:40.224 "product_name": "NVMe disk", 00:38:40.224 "block_size": 4096, 00:38:40.224 "num_blocks": 38912, 00:38:40.224 "uuid": "fa427f4d-618a-4789-9163-1d5b13c206af", 00:38:40.224 "numa_id": 0, 00:38:40.224 "assigned_rate_limits": { 00:38:40.224 "rw_ios_per_sec": 0, 00:38:40.224 "rw_mbytes_per_sec": 0, 00:38:40.224 "r_mbytes_per_sec": 0, 00:38:40.224 "w_mbytes_per_sec": 0 00:38:40.224 }, 00:38:40.224 "claimed": false, 00:38:40.224 "zoned": false, 
00:38:40.224 "supported_io_types": { 00:38:40.224 "read": true, 00:38:40.224 "write": true, 00:38:40.224 "unmap": true, 00:38:40.224 "flush": true, 00:38:40.224 "reset": true, 00:38:40.224 "nvme_admin": true, 00:38:40.224 "nvme_io": true, 00:38:40.224 "nvme_io_md": false, 00:38:40.224 "write_zeroes": true, 00:38:40.224 "zcopy": false, 00:38:40.224 "get_zone_info": false, 00:38:40.224 "zone_management": false, 00:38:40.224 "zone_append": false, 00:38:40.224 "compare": true, 00:38:40.224 "compare_and_write": true, 00:38:40.224 "abort": true, 00:38:40.224 "seek_hole": false, 00:38:40.224 "seek_data": false, 00:38:40.224 "copy": true, 00:38:40.224 "nvme_iov_md": false 00:38:40.224 }, 00:38:40.224 "memory_domains": [ 00:38:40.224 { 00:38:40.224 "dma_device_id": "system", 00:38:40.224 "dma_device_type": 1 00:38:40.224 } 00:38:40.224 ], 00:38:40.224 "driver_specific": { 00:38:40.224 "nvme": [ 00:38:40.224 { 00:38:40.224 "trid": { 00:38:40.224 "trtype": "TCP", 00:38:40.224 "adrfam": "IPv4", 00:38:40.224 "traddr": "10.0.0.2", 00:38:40.224 "trsvcid": "4420", 00:38:40.224 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:40.224 }, 00:38:40.224 "ctrlr_data": { 00:38:40.224 "cntlid": 1, 00:38:40.224 "vendor_id": "0x8086", 00:38:40.224 "model_number": "SPDK bdev Controller", 00:38:40.224 "serial_number": "SPDK0", 00:38:40.224 "firmware_revision": "25.01", 00:38:40.224 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:40.224 "oacs": { 00:38:40.224 "security": 0, 00:38:40.224 "format": 0, 00:38:40.224 "firmware": 0, 00:38:40.224 "ns_manage": 0 00:38:40.224 }, 00:38:40.224 "multi_ctrlr": true, 00:38:40.224 "ana_reporting": false 00:38:40.224 }, 00:38:40.224 "vs": { 00:38:40.224 "nvme_version": "1.3" 00:38:40.224 }, 00:38:40.224 "ns_data": { 00:38:40.224 "id": 1, 00:38:40.224 "can_share": true 00:38:40.224 } 00:38:40.224 } 00:38:40.224 ], 00:38:40.224 "mp_policy": "active_passive" 00:38:40.224 } 00:38:40.224 } 00:38:40.224 ] 00:38:40.224 16:44:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=427869 00:38:40.224 16:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:40.224 16:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:40.482 Running I/O for 10 seconds... 00:38:41.415 Latency(us) 00:38:41.415 [2024-11-19T15:44:31.754Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:41.415 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:41.415 Nvme0n1 : 1.00 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:38:41.415 [2024-11-19T15:44:31.754Z] =================================================================================================================== 00:38:41.416 [2024-11-19T15:44:31.755Z] Total : 15113.00 59.04 0.00 0.00 0.00 0.00 0.00 00:38:41.416 00:38:42.348 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:42.348 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:42.348 Nvme0n1 : 2.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:38:42.348 [2024-11-19T15:44:32.687Z] =================================================================================================================== 00:38:42.348 [2024-11-19T15:44:32.687Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:38:42.348 00:38:42.607 true 00:38:42.607 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:42.607 16:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:42.864 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:42.865 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:42.865 16:44:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 427869 00:38:43.432 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.432 Nvme0n1 : 3.00 15293.67 59.74 0.00 0.00 0.00 0.00 0.00 00:38:43.432 [2024-11-19T15:44:33.771Z] =================================================================================================================== 00:38:43.432 [2024-11-19T15:44:33.771Z] Total : 15293.67 59.74 0.00 0.00 0.00 0.00 0.00 00:38:43.432 00:38:44.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:44.391 Nvme0n1 : 4.00 15320.50 59.85 0.00 0.00 0.00 0.00 0.00 00:38:44.391 [2024-11-19T15:44:34.730Z] =================================================================================================================== 00:38:44.391 [2024-11-19T15:44:34.730Z] Total : 15320.50 59.85 0.00 0.00 0.00 0.00 0.00 00:38:44.391 00:38:45.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.765 Nvme0n1 : 5.00 15362.00 60.01 0.00 0.00 0.00 0.00 0.00 00:38:45.765 [2024-11-19T15:44:36.104Z] =================================================================================================================== 00:38:45.765 [2024-11-19T15:44:36.104Z] Total : 15362.00 60.01 0.00 0.00 0.00 0.00 0.00 00:38:45.765 00:38:46.699 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:38:46.699 Nvme0n1 : 6.00 15447.50 60.34 0.00 0.00 0.00 0.00 0.00 00:38:46.699 [2024-11-19T15:44:37.038Z] =================================================================================================================== 00:38:46.699 [2024-11-19T15:44:37.038Z] Total : 15447.50 60.34 0.00 0.00 0.00 0.00 0.00 00:38:46.699 00:38:47.635 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:47.635 Nvme0n1 : 7.00 15508.57 60.58 0.00 0.00 0.00 0.00 0.00 00:38:47.635 [2024-11-19T15:44:37.974Z] =================================================================================================================== 00:38:47.635 [2024-11-19T15:44:37.974Z] Total : 15508.57 60.58 0.00 0.00 0.00 0.00 0.00 00:38:47.635 00:38:48.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.569 Nvme0n1 : 8.00 15538.50 60.70 0.00 0.00 0.00 0.00 0.00 00:38:48.569 [2024-11-19T15:44:38.908Z] =================================================================================================================== 00:38:48.569 [2024-11-19T15:44:38.908Z] Total : 15538.50 60.70 0.00 0.00 0.00 0.00 0.00 00:38:48.569 00:38:49.502 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:49.502 Nvme0n1 : 9.00 15537.33 60.69 0.00 0.00 0.00 0.00 0.00 00:38:49.502 [2024-11-19T15:44:39.841Z] =================================================================================================================== 00:38:49.502 [2024-11-19T15:44:39.841Z] Total : 15537.33 60.69 0.00 0.00 0.00 0.00 0.00 00:38:49.502 00:38:50.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.437 Nvme0n1 : 10.00 15558.40 60.77 0.00 0.00 0.00 0.00 0.00 00:38:50.437 [2024-11-19T15:44:40.776Z] =================================================================================================================== 00:38:50.437 [2024-11-19T15:44:40.776Z] Total : 15558.40 60.77 0.00 0.00 0.00 0.00 0.00 00:38:50.437 00:38:50.437 
00:38:50.437 Latency(us) 00:38:50.437 [2024-11-19T15:44:40.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.437 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:50.437 Nvme0n1 : 10.01 15561.53 60.79 0.00 0.00 8220.86 4271.98 17961.72 00:38:50.437 [2024-11-19T15:44:40.776Z] =================================================================================================================== 00:38:50.437 [2024-11-19T15:44:40.776Z] Total : 15561.53 60.79 0.00 0.00 8220.86 4271.98 17961.72 00:38:50.437 { 00:38:50.437 "results": [ 00:38:50.437 { 00:38:50.437 "job": "Nvme0n1", 00:38:50.437 "core_mask": "0x2", 00:38:50.437 "workload": "randwrite", 00:38:50.437 "status": "finished", 00:38:50.437 "queue_depth": 128, 00:38:50.437 "io_size": 4096, 00:38:50.437 "runtime": 10.006217, 00:38:50.437 "iops": 15561.525399659033, 00:38:50.437 "mibps": 60.7872085924181, 00:38:50.437 "io_failed": 0, 00:38:50.437 "io_timeout": 0, 00:38:50.437 "avg_latency_us": 8220.860179933325, 00:38:50.437 "min_latency_us": 4271.976296296296, 00:38:50.437 "max_latency_us": 17961.71851851852 00:38:50.437 } 00:38:50.437 ], 00:38:50.437 "core_count": 1 00:38:50.437 } 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 427739 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 427739 ']' 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 427739 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:50.437 16:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427739 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427739' 00:38:50.437 killing process with pid 427739 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 427739 00:38:50.437 Received shutdown signal, test time was about 10.000000 seconds 00:38:50.437 00:38:50.437 Latency(us) 00:38:50.437 [2024-11-19T15:44:40.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:50.437 [2024-11-19T15:44:40.776Z] =================================================================================================================== 00:38:50.437 [2024-11-19T15:44:40.776Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:50.437 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 427739 00:38:50.695 16:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:50.953 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:51.211 16:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:51.212 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 425143 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 425143 00:38:51.471 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 425143 Killed "${NVMF_APP[@]}" "$@" 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=429183 00:38:51.471 16:44:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 429183 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 429183 ']' 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:51.471 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:51.471 16:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:51.730 [2024-11-19 16:44:41.852220] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:51.730 [2024-11-19 16:44:41.853243] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:38:51.730 [2024-11-19 16:44:41.853296] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:51.730 [2024-11-19 16:44:41.925160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:51.730 [2024-11-19 16:44:41.967554] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:51.730 [2024-11-19 16:44:41.967628] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:51.730 [2024-11-19 16:44:41.967651] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:51.730 [2024-11-19 16:44:41.967661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:51.730 [2024-11-19 16:44:41.967671] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:51.730 [2024-11-19 16:44:41.968220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:51.730 [2024-11-19 16:44:42.048244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:51.730 [2024-11-19 16:44:42.048540] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:51.989 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:52.247 [2024-11-19 16:44:42.355042] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:38:52.247 [2024-11-19 16:44:42.355207] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:38:52.247 [2024-11-19 16:44:42.355257] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev fa427f4d-618a-4789-9163-1d5b13c206af 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=fa427f4d-618a-4789-9163-1d5b13c206af 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:52.247 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:52.505 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa427f4d-618a-4789-9163-1d5b13c206af -t 2000 00:38:52.764 [ 00:38:52.764 { 00:38:52.764 "name": "fa427f4d-618a-4789-9163-1d5b13c206af", 00:38:52.764 "aliases": [ 00:38:52.764 "lvs/lvol" 00:38:52.764 ], 00:38:52.764 "product_name": "Logical Volume", 00:38:52.764 "block_size": 4096, 00:38:52.764 "num_blocks": 38912, 00:38:52.764 "uuid": "fa427f4d-618a-4789-9163-1d5b13c206af", 00:38:52.764 "assigned_rate_limits": { 00:38:52.764 "rw_ios_per_sec": 0, 00:38:52.764 "rw_mbytes_per_sec": 0, 00:38:52.764 "r_mbytes_per_sec": 0, 00:38:52.764 "w_mbytes_per_sec": 0 00:38:52.764 }, 00:38:52.764 "claimed": false, 00:38:52.764 "zoned": false, 00:38:52.764 "supported_io_types": { 00:38:52.764 "read": true, 00:38:52.764 "write": true, 00:38:52.764 "unmap": true, 00:38:52.764 "flush": false, 00:38:52.764 "reset": true, 00:38:52.764 "nvme_admin": false, 00:38:52.764 "nvme_io": false, 00:38:52.764 "nvme_io_md": false, 00:38:52.764 "write_zeroes": true, 
00:38:52.764 "zcopy": false, 00:38:52.764 "get_zone_info": false, 00:38:52.764 "zone_management": false, 00:38:52.764 "zone_append": false, 00:38:52.764 "compare": false, 00:38:52.764 "compare_and_write": false, 00:38:52.764 "abort": false, 00:38:52.764 "seek_hole": true, 00:38:52.764 "seek_data": true, 00:38:52.764 "copy": false, 00:38:52.764 "nvme_iov_md": false 00:38:52.764 }, 00:38:52.764 "driver_specific": { 00:38:52.764 "lvol": { 00:38:52.764 "lvol_store_uuid": "a9325919-7432-45da-bdbe-5752cd80db6f", 00:38:52.764 "base_bdev": "aio_bdev", 00:38:52.764 "thin_provision": false, 00:38:52.764 "num_allocated_clusters": 38, 00:38:52.764 "snapshot": false, 00:38:52.764 "clone": false, 00:38:52.764 "esnap_clone": false 00:38:52.764 } 00:38:52.764 } 00:38:52.764 } 00:38:52.764 ] 00:38:52.764 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:52.764 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:52.764 16:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:38:53.022 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:38:53.022 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:53.022 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:38:53.281 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:38:53.281 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:53.539 [2024-11-19 16:44:43.732729] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:53.539 16:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:53.797 request: 00:38:53.797 { 00:38:53.797 "uuid": "a9325919-7432-45da-bdbe-5752cd80db6f", 00:38:53.797 "method": "bdev_lvol_get_lvstores", 00:38:53.797 "req_id": 1 00:38:53.797 } 00:38:53.797 Got JSON-RPC error response 00:38:53.797 response: 00:38:53.797 { 00:38:53.797 "code": -19, 00:38:53.797 "message": "No such device" 00:38:53.797 } 00:38:53.797 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:38:53.797 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:53.797 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:53.797 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:53.797 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:54.056 aio_bdev 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev fa427f4d-618a-4789-9163-1d5b13c206af 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=fa427f4d-618a-4789-9163-1d5b13c206af 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:54.056 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:54.314 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b fa427f4d-618a-4789-9163-1d5b13c206af -t 2000 00:38:54.573 [ 00:38:54.573 { 00:38:54.573 "name": "fa427f4d-618a-4789-9163-1d5b13c206af", 00:38:54.573 "aliases": [ 00:38:54.573 "lvs/lvol" 00:38:54.573 ], 00:38:54.573 "product_name": "Logical Volume", 00:38:54.573 "block_size": 4096, 00:38:54.573 "num_blocks": 38912, 00:38:54.573 "uuid": "fa427f4d-618a-4789-9163-1d5b13c206af", 00:38:54.573 "assigned_rate_limits": { 00:38:54.573 "rw_ios_per_sec": 0, 00:38:54.573 "rw_mbytes_per_sec": 0, 00:38:54.573 
"r_mbytes_per_sec": 0, 00:38:54.573 "w_mbytes_per_sec": 0 00:38:54.573 }, 00:38:54.573 "claimed": false, 00:38:54.573 "zoned": false, 00:38:54.573 "supported_io_types": { 00:38:54.573 "read": true, 00:38:54.573 "write": true, 00:38:54.573 "unmap": true, 00:38:54.573 "flush": false, 00:38:54.573 "reset": true, 00:38:54.573 "nvme_admin": false, 00:38:54.573 "nvme_io": false, 00:38:54.573 "nvme_io_md": false, 00:38:54.573 "write_zeroes": true, 00:38:54.573 "zcopy": false, 00:38:54.573 "get_zone_info": false, 00:38:54.573 "zone_management": false, 00:38:54.573 "zone_append": false, 00:38:54.573 "compare": false, 00:38:54.573 "compare_and_write": false, 00:38:54.573 "abort": false, 00:38:54.573 "seek_hole": true, 00:38:54.573 "seek_data": true, 00:38:54.573 "copy": false, 00:38:54.573 "nvme_iov_md": false 00:38:54.573 }, 00:38:54.573 "driver_specific": { 00:38:54.573 "lvol": { 00:38:54.573 "lvol_store_uuid": "a9325919-7432-45da-bdbe-5752cd80db6f", 00:38:54.573 "base_bdev": "aio_bdev", 00:38:54.573 "thin_provision": false, 00:38:54.573 "num_allocated_clusters": 38, 00:38:54.573 "snapshot": false, 00:38:54.573 "clone": false, 00:38:54.573 "esnap_clone": false 00:38:54.573 } 00:38:54.573 } 00:38:54.573 } 00:38:54.573 ] 00:38:54.573 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:38:54.573 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:54.573 16:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:54.832 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:54.832 16:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:54.832 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:55.398 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:55.398 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete fa427f4d-618a-4789-9163-1d5b13c206af 00:38:55.398 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9325919-7432-45da-bdbe-5752cd80db6f 00:38:55.656 16:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:56.223 00:38:56.223 real 0m19.516s 00:38:56.223 user 0m36.408s 00:38:56.223 sys 0m4.743s 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:56.223 ************************************ 00:38:56.223 END TEST lvs_grow_dirty 00:38:56.223 ************************************ 
00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:38:56.223 nvmf_trace.0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:56.223 16:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:56.223 rmmod nvme_tcp 00:38:56.223 rmmod nvme_fabrics 00:38:56.223 rmmod nvme_keyring 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 429183 ']' 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 429183 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 429183 ']' 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 429183 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429183 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:56.223 16:44:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429183' 00:38:56.223 killing process with pid 429183 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 429183 00:38:56.223 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 429183 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:38:56.483 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:56.484 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:56.484 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:56.484 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:56.484 16:44:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.391 16:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:58.391 00:38:58.391 real 0m42.953s 00:38:58.391 user 0m55.346s 00:38:58.391 sys 0m8.718s 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:58.391 ************************************ 00:38:58.391 END TEST nvmf_lvs_grow 00:38:58.391 ************************************ 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:58.391 ************************************ 00:38:58.391 START TEST nvmf_bdev_io_wait 00:38:58.391 ************************************ 00:38:58.391 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:38:58.649 * Looking for test storage... 
00:38:58.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:38:58.649 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:58.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.650 --rc genhtml_branch_coverage=1 00:38:58.650 --rc genhtml_function_coverage=1 00:38:58.650 --rc genhtml_legend=1 00:38:58.650 --rc geninfo_all_blocks=1 00:38:58.650 --rc geninfo_unexecuted_blocks=1 00:38:58.650 00:38:58.650 ' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:58.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.650 --rc genhtml_branch_coverage=1 00:38:58.650 --rc genhtml_function_coverage=1 00:38:58.650 --rc genhtml_legend=1 00:38:58.650 --rc geninfo_all_blocks=1 00:38:58.650 --rc geninfo_unexecuted_blocks=1 00:38:58.650 00:38:58.650 ' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:58.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.650 --rc genhtml_branch_coverage=1 00:38:58.650 --rc genhtml_function_coverage=1 00:38:58.650 --rc genhtml_legend=1 00:38:58.650 --rc geninfo_all_blocks=1 00:38:58.650 --rc geninfo_unexecuted_blocks=1 00:38:58.650 00:38:58.650 ' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:58.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:58.650 --rc genhtml_branch_coverage=1 00:38:58.650 --rc genhtml_function_coverage=1 
00:38:58.650 --rc genhtml_legend=1 00:38:58.650 --rc geninfo_all_blocks=1 00:38:58.650 --rc geninfo_unexecuted_blocks=1 00:38:58.650 00:38:58.650 ' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.650 16:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.650 16:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:58.650 16:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:58.650 16:44:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:38:58.650 16:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:01.182 16:44:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:01.182 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:01.182 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:01.182 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:01.182 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:01.182 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:01.182 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:01.182 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:01.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:01.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:39:01.183 00:39:01.183 --- 10.0.0.2 ping statistics --- 00:39:01.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.183 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:01.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:01.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:39:01.183 00:39:01.183 --- 10.0.0.1 ping statistics --- 00:39:01.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:01.183 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:01.183 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=431700 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 431700 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 431700 ']' 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:01.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.183 [2024-11-19 16:44:51.271449] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:01.183 [2024-11-19 16:44:51.272522] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:01.183 [2024-11-19 16:44:51.272571] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:01.183 [2024-11-19 16:44:51.346176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:01.183 [2024-11-19 16:44:51.392494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:01.183 [2024-11-19 16:44:51.392542] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:01.183 [2024-11-19 16:44:51.392566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:01.183 [2024-11-19 16:44:51.392577] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:01.183 [2024-11-19 16:44:51.392587] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:01.183 [2024-11-19 16:44:51.394196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:01.183 [2024-11-19 16:44:51.394221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:01.183 [2024-11-19 16:44:51.394282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:01.183 [2024-11-19 16:44:51.394286] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:01.183 [2024-11-19 16:44:51.394826] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.183 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 [2024-11-19 16:44:51.582359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:01.444 [2024-11-19 16:44:51.582548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:01.444 [2024-11-19 16:44:51.583499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:39:01.444 [2024-11-19 16:44:51.584313] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 [2024-11-19 16:44:51.591083] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 Malloc0 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:01.444 [2024-11-19 16:44:51.647247] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=431730 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=431732 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:01.444 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=431734 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.444 { 00:39:01.444 "params": { 00:39:01.444 "name": "Nvme$subsystem", 00:39:01.444 "trtype": "$TEST_TRANSPORT", 00:39:01.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.444 "adrfam": "ipv4", 00:39:01.444 "trsvcid": "$NVMF_PORT", 00:39:01.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.444 "hdgst": ${hdgst:-false}, 00:39:01.444 "ddgst": ${ddgst:-false} 00:39:01.444 }, 00:39:01.444 "method": "bdev_nvme_attach_controller" 00:39:01.444 } 00:39:01.444 EOF 00:39:01.444 )") 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.444 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=431736 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.444 { 00:39:01.444 "params": { 00:39:01.444 "name": "Nvme$subsystem", 00:39:01.444 "trtype": "$TEST_TRANSPORT", 00:39:01.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.444 "adrfam": "ipv4", 00:39:01.444 "trsvcid": "$NVMF_PORT", 00:39:01.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.444 "hdgst": ${hdgst:-false}, 00:39:01.444 "ddgst": ${ddgst:-false} 00:39:01.444 }, 00:39:01.444 "method": "bdev_nvme_attach_controller" 00:39:01.444 } 00:39:01.444 EOF 00:39:01.444 )") 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.444 16:44:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.444 { 00:39:01.444 "params": { 00:39:01.444 "name": "Nvme$subsystem", 00:39:01.444 "trtype": "$TEST_TRANSPORT", 00:39:01.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.444 "adrfam": "ipv4", 00:39:01.444 "trsvcid": "$NVMF_PORT", 00:39:01.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.444 "hdgst": ${hdgst:-false}, 00:39:01.444 "ddgst": ${ddgst:-false} 00:39:01.444 }, 00:39:01.444 "method": "bdev_nvme_attach_controller" 00:39:01.444 } 00:39:01.444 EOF 00:39:01.444 )") 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:01.444 { 00:39:01.444 "params": { 00:39:01.444 "name": "Nvme$subsystem", 00:39:01.444 "trtype": "$TEST_TRANSPORT", 00:39:01.444 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:01.444 "adrfam": "ipv4", 00:39:01.444 "trsvcid": "$NVMF_PORT", 00:39:01.444 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:01.444 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:01.444 "hdgst": ${hdgst:-false}, 00:39:01.444 "ddgst": ${ddgst:-false} 00:39:01.444 }, 00:39:01.444 "method": "bdev_nvme_attach_controller" 00:39:01.444 } 00:39:01.444 EOF 00:39:01.444 )") 00:39:01.444 
16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.444 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 431730 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.445 "params": { 00:39:01.445 "name": "Nvme1", 00:39:01.445 "trtype": "tcp", 00:39:01.445 "traddr": "10.0.0.2", 00:39:01.445 "adrfam": "ipv4", 00:39:01.445 "trsvcid": "4420", 00:39:01.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.445 "hdgst": false, 00:39:01.445 "ddgst": false 00:39:01.445 }, 00:39:01.445 "method": "bdev_nvme_attach_controller" 00:39:01.445 }' 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.445 "params": { 00:39:01.445 "name": "Nvme1", 00:39:01.445 "trtype": "tcp", 00:39:01.445 "traddr": "10.0.0.2", 00:39:01.445 "adrfam": "ipv4", 00:39:01.445 "trsvcid": "4420", 
00:39:01.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.445 "hdgst": false, 00:39:01.445 "ddgst": false 00:39:01.445 }, 00:39:01.445 "method": "bdev_nvme_attach_controller" 00:39:01.445 }' 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.445 "params": { 00:39:01.445 "name": "Nvme1", 00:39:01.445 "trtype": "tcp", 00:39:01.445 "traddr": "10.0.0.2", 00:39:01.445 "adrfam": "ipv4", 00:39:01.445 "trsvcid": "4420", 00:39:01.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.445 "hdgst": false, 00:39:01.445 "ddgst": false 00:39:01.445 }, 00:39:01.445 "method": "bdev_nvme_attach_controller" 00:39:01.445 }' 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:01.445 16:44:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:01.445 "params": { 00:39:01.445 "name": "Nvme1", 00:39:01.445 "trtype": "tcp", 00:39:01.445 "traddr": "10.0.0.2", 00:39:01.445 "adrfam": "ipv4", 00:39:01.445 "trsvcid": "4420", 00:39:01.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:01.445 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:01.445 "hdgst": false, 00:39:01.445 "ddgst": false 00:39:01.445 }, 00:39:01.445 "method": "bdev_nvme_attach_controller" 00:39:01.445 }' 00:39:01.445 [2024-11-19 16:44:51.697730] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:39:01.445 [2024-11-19 16:44:51.697811] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:01.445 [2024-11-19 16:44:51.697906] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:01.445 [2024-11-19 16:44:51.697906] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:01.445 [2024-11-19 16:44:51.697906] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:01.445 [2024-11-19 16:44:51.697985] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:01.445 [2024-11-19 16:44:51.697986] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:01.445 [2024-11-19 16:44:51.697986] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:01.703 [2024-11-19 16:44:51.879217] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.703 [2024-11-19 16:44:51.923000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:39:01.703 [2024-11-19 16:44:51.984013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.703 [2024-11-19 16:44:52.026826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:39:01.962 [2024-11-19 16:44:52.084010] app.c: 919:spdk_app_start:
*NOTICE*: Total cores available: 1 00:39:01.962 [2024-11-19 16:44:52.127879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:39:01.962 [2024-11-19 16:44:52.156454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:01.962 [2024-11-19 16:44:52.194143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:39:02.220 Running I/O for 1 seconds... 00:39:02.220 Running I/O for 1 seconds... 00:39:02.220 Running I/O for 1 seconds... 00:39:02.220 Running I/O for 1 seconds... 00:39:03.155 6956.00 IOPS, 27.17 MiB/s 00:39:03.155 Latency(us) 00:39:03.155 [2024-11-19T15:44:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.156 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:03.156 Nvme1n1 : 1.03 6896.03 26.94 0.00 0.00 18309.13 4271.98 33593.27 00:39:03.156 [2024-11-19T15:44:53.495Z] =================================================================================================================== 00:39:03.156 [2024-11-19T15:44:53.495Z] Total : 6896.03 26.94 0.00 0.00 18309.13 4271.98 33593.27 00:39:03.156 9701.00 IOPS, 37.89 MiB/s 00:39:03.156 Latency(us) 00:39:03.156 [2024-11-19T15:44:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.156 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:03.156 Nvme1n1 : 1.01 9742.46 38.06 0.00 0.00 13074.75 4805.97 17670.45 00:39:03.156 [2024-11-19T15:44:53.495Z] =================================================================================================================== 00:39:03.156 [2024-11-19T15:44:53.495Z] Total : 9742.46 38.06 0.00 0.00 13074.75 4805.97 17670.45 00:39:03.156 6693.00 IOPS, 26.14 MiB/s 00:39:03.156 Latency(us) 00:39:03.156 [2024-11-19T15:44:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.156 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:03.156 Nvme1n1 : 1.01 
6810.27 26.60 0.00 0.00 18737.49 4538.97 37671.06 00:39:03.156 [2024-11-19T15:44:53.495Z] =================================================================================================================== 00:39:03.156 [2024-11-19T15:44:53.495Z] Total : 6810.27 26.60 0.00 0.00 18737.49 4538.97 37671.06 00:39:03.156 193008.00 IOPS, 753.94 MiB/s 00:39:03.156 Latency(us) 00:39:03.156 [2024-11-19T15:44:53.495Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:03.156 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:03.156 Nvme1n1 : 1.00 192649.39 752.54 0.00 0.00 660.85 297.34 1844.72 00:39:03.156 [2024-11-19T15:44:53.495Z] =================================================================================================================== 00:39:03.156 [2024-11-19T15:44:53.495Z] Total : 192649.39 752.54 0.00 0.00 660.85 297.34 1844.72 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 431732 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 431734 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 431736 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:03.414 
16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:03.414 rmmod nvme_tcp 00:39:03.414 rmmod nvme_fabrics 00:39:03.414 rmmod nvme_keyring 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 431700 ']' 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 431700 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 431700 ']' 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 431700 00:39:03.414 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:03.414 16:44:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 431700 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 431700' 00:39:03.415 killing process with pid 431700 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 431700 00:39:03.415 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 431700 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:03.675 16:44:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:06.217 00:39:06.217 real 0m7.232s 00:39:06.217 user 0m14.189s 00:39:06.217 sys 0m3.915s 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:06.217 ************************************ 00:39:06.217 END TEST nvmf_bdev_io_wait 00:39:06.217 ************************************ 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:06.217 16:44:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:06.217 ************************************ 00:39:06.217 START TEST nvmf_queue_depth 00:39:06.217 ************************************ 00:39:06.217 16:44:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:06.217 * Looking for test storage... 00:39:06.217 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # 
ver1_l=2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 
00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.217 --rc genhtml_branch_coverage=1 00:39:06.217 --rc genhtml_function_coverage=1 00:39:06.217 --rc genhtml_legend=1 00:39:06.217 --rc geninfo_all_blocks=1 00:39:06.217 --rc geninfo_unexecuted_blocks=1 00:39:06.217 00:39:06.217 ' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.217 --rc genhtml_branch_coverage=1 00:39:06.217 --rc genhtml_function_coverage=1 00:39:06.217 --rc genhtml_legend=1 00:39:06.217 --rc geninfo_all_blocks=1 00:39:06.217 --rc geninfo_unexecuted_blocks=1 00:39:06.217 00:39:06.217 ' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.217 --rc genhtml_branch_coverage=1 00:39:06.217 --rc genhtml_function_coverage=1 00:39:06.217 --rc genhtml_legend=1 00:39:06.217 --rc geninfo_all_blocks=1 00:39:06.217 --rc geninfo_unexecuted_blocks=1 00:39:06.217 00:39:06.217 ' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:06.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:06.217 --rc genhtml_branch_coverage=1 00:39:06.217 --rc genhtml_function_coverage=1 00:39:06.217 --rc genhtml_legend=1 00:39:06.217 --rc geninfo_all_blocks=1 00:39:06.217 --rc geninfo_unexecuted_blocks=1 00:39:06.217 00:39:06.217 ' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:06.217 16:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:06.217 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.218 16:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:06.218 16:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:06.218 16:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:06.218 16:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:08.121 
16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:08.121 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:08.121 16:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:08.121 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:08.121 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:08.121 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:08.121 16:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:08.121 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:08.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:08.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:39:08.122 00:39:08.122 --- 10.0.0.2 ping statistics --- 00:39:08.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.122 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:08.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:08.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:39:08.122 00:39:08.122 --- 10.0.0.1 ping statistics --- 00:39:08.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:08.122 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:08.122 16:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=433941 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 433941 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 433941 ']' 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:08.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.122 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.122 [2024-11-19 16:44:58.417723] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:08.122 [2024-11-19 16:44:58.418803] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:08.122 [2024-11-19 16:44:58.418862] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:08.382 [2024-11-19 16:44:58.492887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.382 [2024-11-19 16:44:58.534646] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:08.382 [2024-11-19 16:44:58.534707] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:08.382 [2024-11-19 16:44:58.534730] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:08.382 [2024-11-19 16:44:58.534740] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:08.382 [2024-11-19 16:44:58.534749] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:08.382 [2024-11-19 16:44:58.535397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:08.382 [2024-11-19 16:44:58.617440] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:08.382 [2024-11-19 16:44:58.617728] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.382 [2024-11-19 16:44:58.671981] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.382 Malloc0 00:39:08.382 16:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.382 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.641 [2024-11-19 16:44:58.732095] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:08.641 
16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=433979 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:08.641 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 433979 /var/tmp/bdevperf.sock 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 433979 ']' 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:08.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:08.642 16:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:08.642 [2024-11-19 16:44:58.778815] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:39:08.642 [2024-11-19 16:44:58.778880] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid433979 ] 00:39:08.642 [2024-11-19 16:44:58.844507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:08.642 [2024-11-19 16:44:58.890543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:08.900 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:08.900 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:08.900 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:08.900 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:08.900 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:09.157 NVMe0n1 00:39:09.157 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:09.157 16:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:09.157 Running I/O for 10 seconds... 
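Stripped of the xtrace noise, the queue-depth test setup traced above reduces to a handful of SPDK JSON-RPC calls plus a bdevperf run. A condensed sketch, with every command taken from this trace (`rpc.py` is SPDK's `scripts/rpc.py`; the 10.0.0.2/4420 listener address and socket paths are specific to this run):

```shell
# Target side: TCP transport, a 64 MiB / 512 B-block malloc bdev,
# and a subsystem exposing it on 10.0.0.2:4420.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side: bdevperf in wait mode, queue depth 1024, 4 KiB I/Os,
# 10 s verify workload; attach the controller, then kick off the run.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

This is a command sketch of what `target/queue_depth.sh` drives through its `rpc_cmd` wrapper, not a standalone script; it assumes a running nvmf target and the SPDK build tree on `PATH`.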
00:39:11.474 8192.00 IOPS, 32.00 MiB/s [2024-11-19T15:45:02.755Z] 8199.00 IOPS, 32.03 MiB/s [2024-11-19T15:45:03.692Z] 8467.00 IOPS, 33.07 MiB/s [2024-11-19T15:45:04.630Z] 8448.00 IOPS, 33.00 MiB/s [2024-11-19T15:45:05.566Z] 8547.80 IOPS, 33.39 MiB/s [2024-11-19T15:45:06.506Z] 8533.83 IOPS, 33.34 MiB/s [2024-11-19T15:45:07.881Z] 8591.86 IOPS, 33.56 MiB/s [2024-11-19T15:45:08.451Z] 8578.12 IOPS, 33.51 MiB/s [2024-11-19T15:45:09.827Z] 8621.44 IOPS, 33.68 MiB/s [2024-11-19T15:45:09.827Z] 8604.70 IOPS, 33.61 MiB/s 00:39:19.488 Latency(us) 00:39:19.488 [2024-11-19T15:45:09.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.488 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:19.488 Verification LBA range: start 0x0 length 0x4000 00:39:19.488 NVMe0n1 : 10.07 8647.14 33.78 0.00 0.00 117951.39 13398.47 71846.87 00:39:19.488 [2024-11-19T15:45:09.827Z] =================================================================================================================== 00:39:19.488 [2024-11-19T15:45:09.827Z] Total : 8647.14 33.78 0.00 0.00 117951.39 13398.47 71846.87 00:39:19.488 { 00:39:19.488 "results": [ 00:39:19.488 { 00:39:19.488 "job": "NVMe0n1", 00:39:19.488 "core_mask": "0x1", 00:39:19.488 "workload": "verify", 00:39:19.488 "status": "finished", 00:39:19.488 "verify_range": { 00:39:19.488 "start": 0, 00:39:19.488 "length": 16384 00:39:19.488 }, 00:39:19.488 "queue_depth": 1024, 00:39:19.488 "io_size": 4096, 00:39:19.488 "runtime": 10.069337, 00:39:19.488 "iops": 8647.14330248357, 00:39:19.488 "mibps": 33.777903525326444, 00:39:19.488 "io_failed": 0, 00:39:19.488 "io_timeout": 0, 00:39:19.488 "avg_latency_us": 117951.39405568124, 00:39:19.488 "min_latency_us": 13398.471111111112, 00:39:19.488 "max_latency_us": 71846.87407407408 00:39:19.488 } 00:39:19.488 ], 00:39:19.488 "core_count": 1 00:39:19.488 } 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 433979 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 433979 ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 433979 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433979 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433979' 00:39:19.488 killing process with pid 433979 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 433979 00:39:19.488 Received shutdown signal, test time was about 10.000000 seconds 00:39:19.488 00:39:19.488 Latency(us) 00:39:19.488 [2024-11-19T15:45:09.827Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.488 [2024-11-19T15:45:09.827Z] =================================================================================================================== 00:39:19.488 [2024-11-19T15:45:09.827Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 433979 00:39:19.488 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.488 rmmod nvme_tcp 00:39:19.488 rmmod nvme_fabrics 00:39:19.488 rmmod nvme_keyring 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 433941 ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 433941 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 433941 ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 433941 00:39:19.488 16:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.488 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 433941 00:39:19.749 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:19.749 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:19.749 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 433941' 00:39:19.749 killing process with pid 433941 00:39:19.749 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 433941 00:39:19.749 16:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 433941 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
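As a sanity check on the bdevperf summary above: with a fixed 4096-byte I/O size, the MiB/s column is just the IOPS column scaled by 4096/2^20, which reproduces the reported 33.78 MiB/s from 8647.14 IOPS (both figures copied from this run's JSON results):

```shell
# Reproduce the "mibps" field of the bdevperf summary from its "iops" field.
iops=8647.14    # "iops" in the JSON results above
io_size=4096    # -o 4096 on the bdevperf command line
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.2f\n", iops * sz / (1024 * 1024) }'
# prints 33.78, matching "mibps": 33.777903525326444
```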
00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.749 16:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:22.290 00:39:22.290 real 0m16.087s 00:39:22.290 user 0m22.331s 00:39:22.290 sys 0m3.385s 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:22.290 ************************************ 00:39:22.290 END TEST nvmf_queue_depth 00:39:22.290 ************************************ 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.290 ************************************ 00:39:22.290 START 
TEST nvmf_target_multipath 00:39:22.290 ************************************ 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:22.290 * Looking for test storage... 00:39:22.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.290 16:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.290 --rc genhtml_branch_coverage=1 00:39:22.290 --rc genhtml_function_coverage=1 00:39:22.290 --rc genhtml_legend=1 00:39:22.290 --rc geninfo_all_blocks=1 00:39:22.290 --rc geninfo_unexecuted_blocks=1 00:39:22.290 00:39:22.290 ' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.290 --rc genhtml_branch_coverage=1 00:39:22.290 --rc genhtml_function_coverage=1 00:39:22.290 --rc genhtml_legend=1 00:39:22.290 --rc geninfo_all_blocks=1 00:39:22.290 --rc geninfo_unexecuted_blocks=1 00:39:22.290 00:39:22.290 ' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.290 --rc genhtml_branch_coverage=1 00:39:22.290 --rc genhtml_function_coverage=1 00:39:22.290 --rc genhtml_legend=1 00:39:22.290 --rc geninfo_all_blocks=1 00:39:22.290 --rc geninfo_unexecuted_blocks=1 00:39:22.290 00:39:22.290 ' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:22.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.290 --rc genhtml_branch_coverage=1 00:39:22.290 --rc genhtml_function_coverage=1 00:39:22.290 --rc genhtml_legend=1 00:39:22.290 --rc geninfo_all_blocks=1 00:39:22.290 --rc geninfo_unexecuted_blocks=1 00:39:22.290 00:39:22.290 ' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.290 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:22.291 16:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.291 16:45:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:22.291 16:45:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:24.197 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:24.197 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:24.197 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:24.197 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:24.198 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:24.198 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:24.198 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:24.198 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:24.198 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:24.457 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:24.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:24.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:39:24.457 00:39:24.457 --- 10.0.0.2 ping statistics --- 00:39:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.457 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:24.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:24.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:39:24.457 00:39:24.457 --- 10.0.0.1 ping statistics --- 00:39:24.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:24.457 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:24.457 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:24.458 only one NIC for nvmf test 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:24.458 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:24.458 rmmod nvme_tcp 00:39:24.458 rmmod nvme_fabrics 00:39:24.458 rmmod nvme_keyring 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:24.458 16:45:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:24.458 16:45:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.368 
16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:26.368 00:39:26.368 real 0m4.556s 00:39:26.368 user 0m0.937s 00:39:26.368 sys 0m1.638s 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:26.368 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:26.368 ************************************ 00:39:26.368 END TEST nvmf_target_multipath 00:39:26.368 ************************************ 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:26.628 ************************************ 00:39:26.628 START TEST nvmf_zcopy 00:39:26.628 ************************************ 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:26.628 * Looking for test storage... 
00:39:26.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:26.628 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:26.629 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.629 --rc genhtml_branch_coverage=1 00:39:26.629 --rc genhtml_function_coverage=1 00:39:26.629 --rc genhtml_legend=1 00:39:26.629 --rc geninfo_all_blocks=1 00:39:26.629 --rc geninfo_unexecuted_blocks=1 00:39:26.629 00:39:26.629 ' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.629 --rc genhtml_branch_coverage=1 00:39:26.629 --rc genhtml_function_coverage=1 00:39:26.629 --rc genhtml_legend=1 00:39:26.629 --rc geninfo_all_blocks=1 00:39:26.629 --rc geninfo_unexecuted_blocks=1 00:39:26.629 00:39:26.629 ' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.629 --rc genhtml_branch_coverage=1 00:39:26.629 --rc genhtml_function_coverage=1 00:39:26.629 --rc genhtml_legend=1 00:39:26.629 --rc geninfo_all_blocks=1 00:39:26.629 --rc geninfo_unexecuted_blocks=1 00:39:26.629 00:39:26.629 ' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:26.629 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:26.629 --rc genhtml_branch_coverage=1 00:39:26.629 --rc genhtml_function_coverage=1 00:39:26.629 --rc genhtml_legend=1 00:39:26.629 --rc geninfo_all_blocks=1 00:39:26.629 --rc geninfo_unexecuted_blocks=1 00:39:26.629 00:39:26.629 ' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:26.629 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:26.629 16:45:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:26.629 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:26.630 16:45:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:29.164 
16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:29.164 16:45:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:29.164 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:29.164 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.164 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:29.165 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:29.165 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
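The interface discovery above (common.sh@410-@429) maps each detected PCI device to its kernel interface name by globbing sysfs. A minimal sketch of that pattern, using a temporary directory to stand in for `/sys/bus/pci/devices` so it runs without the hardware; the PCI addresses and `cvl_0_*` names are the ones this log happens to report:

```shell
#!/usr/bin/env bash
# Simulate the sysfs layout so the glob can be exercised without a real NIC.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:0a:00.0/net/cvl_0_0" "$sysfs/0000:0a:00.1/net/cvl_0_1"

net_devs=()
for pci in 0000:0a:00.0 0000:0a:00.1; do
  # The real harness globs /sys/bus/pci/devices/$pci/net/* here.
  pci_net_devs=("$sysfs/$pci/net/"*)
  pci_net_devs=("${pci_net_devs[@]##*/}")  # strip the path, keep ifnames
  net_devs+=("${pci_net_devs[@]}")
done
echo "${net_devs[@]}"
```

With two ports found, the harness then assigns the first interface as the target side and the second as the initiator side, as the `NVMF_TARGET_INTERFACE`/`NVMF_INITIATOR_INTERFACE` lines above show.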
00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:29.165 16:45:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:29.165 16:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:29.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:29.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:39:29.165 00:39:29.165 --- 10.0.0.2 ping statistics --- 00:39:29.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.165 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:29.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:29.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:39:29.165 00:39:29.165 --- 10.0.0.1 ping statistics --- 00:39:29.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:29.165 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=439177 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 439177 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 439177 ']' 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:29.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:29.165 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.165 [2024-11-19 16:45:19.285623] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:29.165 [2024-11-19 16:45:19.286707] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:39:29.165 [2024-11-19 16:45:19.286762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:29.165 [2024-11-19 16:45:19.361443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.165 [2024-11-19 16:45:19.406321] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:29.165 [2024-11-19 16:45:19.406386] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:29.165 [2024-11-19 16:45:19.406410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:29.165 [2024-11-19 16:45:19.406422] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:29.165 [2024-11-19 16:45:19.406432] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:29.165 [2024-11-19 16:45:19.407011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:29.165 [2024-11-19 16:45:19.495240] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:29.165 [2024-11-19 16:45:19.495599] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
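The namespace split performed just above (common.sh@271-@291) moves one NIC port into a private namespace to act as the NVMe/TCP target while the other port stays in the root namespace as the initiator. The sketch below emits that command sequence instead of executing it, so it stays runnable without root; names and addresses are the ones visible in this log, and the helper function itself is illustrative, not part of the harness:

```shell
#!/usr/bin/env bash
# Dry-run generator for the target/initiator namespace topology in the log.
# Run the emitted commands as root to actually reproduce it.
ns_setup_cmds() {
  local ns=$1 tgt_if=$2 ini_if=$3
  cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}

ns_setup_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The bidirectional pings at the end mirror the connectivity check the harness performs before prefixing every target-side command with `ip netns exec "$NVMF_TARGET_NAMESPACE"`.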
00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 [2024-11-19 16:45:19.547576] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 
16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 [2024-11-19 16:45:19.563774] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 malloc0 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:29.425 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:29.425 { 00:39:29.425 "params": { 00:39:29.425 "name": "Nvme$subsystem", 00:39:29.425 "trtype": "$TEST_TRANSPORT", 00:39:29.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:29.425 "adrfam": "ipv4", 00:39:29.425 "trsvcid": "$NVMF_PORT", 00:39:29.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:29.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:29.426 "hdgst": ${hdgst:-false}, 00:39:29.426 "ddgst": ${ddgst:-false} 00:39:29.426 }, 00:39:29.426 "method": "bdev_nvme_attach_controller" 00:39:29.426 } 00:39:29.426 EOF 00:39:29.426 )") 00:39:29.426 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:29.426 16:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:29.426 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:29.426 16:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:29.426 "params": { 00:39:29.426 "name": "Nvme1", 00:39:29.426 "trtype": "tcp", 00:39:29.426 "traddr": "10.0.0.2", 00:39:29.426 "adrfam": "ipv4", 00:39:29.426 "trsvcid": "4420", 00:39:29.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:29.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:29.426 "hdgst": false, 00:39:29.426 "ddgst": false 00:39:29.426 }, 00:39:29.426 "method": "bdev_nvme_attach_controller" 00:39:29.426 }' 00:39:29.426 [2024-11-19 16:45:19.647520] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:39:29.426 [2024-11-19 16:45:19.647603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid439315 ] 00:39:29.426 [2024-11-19 16:45:19.715613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:29.685 [2024-11-19 16:45:19.764276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:29.685 Running I/O for 10 seconds... 
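The `gen_nvmf_target_json` expansion above shows the harness building the bdevperf configuration on the fly: a heredoc template per subsystem with the shell substituting the transport parameters. A minimal sketch of that pattern, with the values hard-coded to the ones visible in this log:

```shell
#!/usr/bin/env bash
# Heredoc-template sketch of the per-subsystem JSON fragment the harness
# generates; the real helper loops over subsystems and joins fragments.
gen_target_json() {
  local subsystem=1
  cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_target_json
```

bdevperf never sees a file on disk: the `--json /dev/fd/62` argument in the log is the file descriptor produced when the generated JSON is fed in via process substitution.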
00:39:31.998 5674.00 IOPS, 44.33 MiB/s [2024-11-19T15:45:23.275Z] 5724.50 IOPS, 44.72 MiB/s [2024-11-19T15:45:24.208Z] 5718.00 IOPS, 44.67 MiB/s [2024-11-19T15:45:25.144Z] 5738.25 IOPS, 44.83 MiB/s [2024-11-19T15:45:26.084Z] 5730.80 IOPS, 44.77 MiB/s [2024-11-19T15:45:27.021Z] 5732.33 IOPS, 44.78 MiB/s [2024-11-19T15:45:28.051Z] 5732.57 IOPS, 44.79 MiB/s [2024-11-19T15:45:29.430Z] 5731.62 IOPS, 44.78 MiB/s [2024-11-19T15:45:30.369Z] 5732.33 IOPS, 44.78 MiB/s [2024-11-19T15:45:30.369Z] 5732.90 IOPS, 44.79 MiB/s 00:39:40.030 Latency(us) 00:39:40.030 [2024-11-19T15:45:30.369Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:40.030 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:39:40.030 Verification LBA range: start 0x0 length 0x1000 00:39:40.030 Nvme1n1 : 10.02 5734.79 44.80 0.00 0.00 22258.64 1225.77 28932.93 00:39:40.030 [2024-11-19T15:45:30.369Z] =================================================================================================================== 00:39:40.030 [2024-11-19T15:45:30.369Z] Total : 5734.79 44.80 0.00 0.00 22258.64 1225.77 28932.93 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=440456 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:40.030 16:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:39:40.030 {
00:39:40.030 "params": {
00:39:40.030 "name": "Nvme$subsystem",
00:39:40.030 "trtype": "$TEST_TRANSPORT",
00:39:40.030 "traddr": "$NVMF_FIRST_TARGET_IP",
00:39:40.030 "adrfam": "ipv4",
00:39:40.030 "trsvcid": "$NVMF_PORT",
00:39:40.030 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:39:40.030 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:39:40.030 "hdgst": ${hdgst:-false},
00:39:40.030 "ddgst": ${ddgst:-false}
00:39:40.030 },
00:39:40.030 "method": "bdev_nvme_attach_controller"
00:39:40.030 }
00:39:40.030 EOF
00:39:40.030 )")
00:39:40.030 [2024-11-19 16:45:30.231550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:40.030 [2024-11-19 16:45:30.231589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:39:40.030 16:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:39:40.030 "params": {
00:39:40.030 "name": "Nvme1",
00:39:40.030 "trtype": "tcp",
00:39:40.030 "traddr": "10.0.0.2",
00:39:40.030 "adrfam": "ipv4",
00:39:40.030 "trsvcid": "4420",
00:39:40.030 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:39:40.030 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:39:40.030 "hdgst": false,
00:39:40.030 "ddgst": false
00:39:40.030 },
00:39:40.030 "method": "bdev_nvme_attach_controller"
00:39:40.030 }'
00:39:40.030 [2024-11-19 16:45:30.239490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:40.030 [2024-11-19 16:45:30.239512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:40.030 [2024-11-19 16:45:30.247487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:40.030 [2024-11-19 16:45:30.247507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:40.030 [2024-11-19 16:45:30.255493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:40.030 [2024-11-19 16:45:30.255515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:40.030 [2024-11-19 16:45:30.263487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:39:40.030 [2024-11-19 16:45:30.263507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:39:40.030 [2024-11-19 16:45:30.271407] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization...
00:39:40.030 [2024-11-19 16:45:30.271482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid440456 ] 00:39:40.030 [2024-11-19 16:45:30.271491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.271510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.279490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.279526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.287486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.287506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.295470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.295489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.303487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.303507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.311488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.311507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.319487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.319507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:39:40.030 [2024-11-19 16:45:30.327487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.327506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.335502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.335522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.340457] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:40.030 [2024-11-19 16:45:30.343486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.343505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.351519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.351553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.030 [2024-11-19 16:45:30.359521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.030 [2024-11-19 16:45:30.359548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.290 [2024-11-19 16:45:30.367507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.290 [2024-11-19 16:45:30.367541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.375487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.375508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.383487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.383507] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.390297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:40.291 [2024-11-19 16:45:30.391488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.391508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.399470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.399489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.407514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.407547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.415511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.415543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.423517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.423551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.431515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.431548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.439514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.439550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.447510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:39:40.291 [2024-11-19 16:45:30.447545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.455500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.455534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.463488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.463508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.471493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.471526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.479510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.479542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.487495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.487518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.495488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.495507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.503496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.503520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.511516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 
16:45:30.511540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.519492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.519515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.527478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.527500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.535491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.535513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.543489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.543510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.551487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.551506] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.559491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.559512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.567489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.567509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.575509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.575532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.583494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.583516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.591492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.591515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.599826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.599854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 [2024-11-19 16:45:30.607511] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.607535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.291 Running I/O for 5 seconds... 
00:39:40.291 [2024-11-19 16:45:30.623895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.291 [2024-11-19 16:45:30.623925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.633918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.633946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.650143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.650182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.664787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.664815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.675046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.675108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.686931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.686959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.697440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.697473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.713023] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.713064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.722377] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.722404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.737400] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.737426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.747083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.747114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.758994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.759021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.771783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.771811] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.781502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.781529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.793434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.793463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.807573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.807600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.817470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.817497] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.833263] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.833290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.843192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.843219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.855286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.855315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.866237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.866264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.551 [2024-11-19 16:45:30.880611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.551 [2024-11-19 16:45:30.880638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.890032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.890060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.901781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.901808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.916463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 
[2024-11-19 16:45:30.916492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.926626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.926669] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.940852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.940879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.950239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.950267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.964059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.964096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.974213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.974240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.989328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.989370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:30.998705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:30.998733] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.014790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.014816] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.024538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.024563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.036377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.036418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.046992] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.047018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.058169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.058197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.072672] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.072701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.082530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.082557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.096430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.096475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.106489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.106515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:40.810 [2024-11-19 16:45:31.120164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.120191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.129789] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.129815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:40.810 [2024-11-19 16:45:31.144000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:40.810 [2024-11-19 16:45:31.144027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.153946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.153998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.168355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.168382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.178757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.178783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.192167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.192195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.201995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.202022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.216831] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.216857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.226618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.226644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.240401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.240428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.250079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.250107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.263617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.263643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.273438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.273463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.284952] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.284978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.302344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.302385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071 [2024-11-19 16:45:31.317701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:41.071 [2024-11-19 16:45:31.317728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:41.071
[... the error pair repeats continuously from 16:45:31.327 to 16:45:33.329: subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use / nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace ...]
11637.00 IOPS, 90.91 MiB/s [2024-11-19T15:45:31.672Z]
11587.00 IOPS, 90.52 MiB/s [2024-11-19T15:45:32.712Z]
[2024-11-19 16:45:33.329889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.153 [2024-11-19 16:45:33.329915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to
add namespace 00:39:43.153 [2024-11-19 16:45:33.342093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.153 [2024-11-19 16:45:33.342132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.153 [2024-11-19 16:45:33.357939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.153 [2024-11-19 16:45:33.357977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.153 [2024-11-19 16:45:33.373476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.153 [2024-11-19 16:45:33.373502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.153 [2024-11-19 16:45:33.391182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.153 [2024-11-19 16:45:33.391209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.400430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.400454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.412375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.412402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.423294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.423321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.434143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.434169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.448038] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.448093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.457850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.457875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.471921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.471946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.154 [2024-11-19 16:45:33.481549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.154 [2024-11-19 16:45:33.481575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.493281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.493308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.509655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.509695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.519495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.519520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.531317] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.531343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.542421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.542447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.557319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.557346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.566848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.566872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.578946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.578971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.590213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.590240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.606748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.606773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.616597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.616622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 11616.67 IOPS, 90.76 MiB/s [2024-11-19T15:45:33.751Z] [2024-11-19 16:45:33.628857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.628881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.645042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.645092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.655024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.655064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.667066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.667098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.678152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.678179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.692964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.692990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.702619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.412 [2024-11-19 16:45:33.702645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.412 [2024-11-19 16:45:33.717441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.413 [2024-11-19 16:45:33.717473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.413 [2024-11-19 16:45:33.726748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.413 [2024-11-19 16:45:33.726772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.413 [2024-11-19 16:45:33.738824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.413 
[2024-11-19 16:45:33.738849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.751431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.751457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.761233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.761260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.773539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.773564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.786767] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.786793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.796693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.796719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.812783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.812807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.822606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.822631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.834442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.834466] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.848628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.848653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.858129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.858156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.872140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.872167] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.881257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.881298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.892929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.892954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.903727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.903751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.914608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.914647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.929049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.929081] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:43.671 [2024-11-19 16:45:33.938444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.938477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.952604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.952630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.962585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.962610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.978049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.978097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:33.993743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:33.993782] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.671 [2024-11-19 16:45:34.003515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.671 [2024-11-19 16:45:34.003541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.015762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.015786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.027396] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.027423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.038268] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.038294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.053661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.053701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.062678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.062702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.076780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.076804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.086788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.086813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.098684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.098709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.111604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.111630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.121141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.121170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.133111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.133140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.149282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.149310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.159099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.159125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.171621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.171655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.182398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.182437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.197537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.197561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.206714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.206740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.218499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.218524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.232900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 
[2024-11-19 16:45:34.232940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.242411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.242437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.256603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.256628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:43.931 [2024-11-19 16:45:34.266065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:43.931 [2024-11-19 16:45:34.266101] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.280329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.280377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.289753] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.289792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.301910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.301936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.318281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.318308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.333514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.333541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.342650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.342676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.356742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.356767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.366681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.366708] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.380538] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.380562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.389749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.389775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.401415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.401453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.417486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.417534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.426803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.426829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:44.192 [2024-11-19 16:45:34.438912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.438938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.453032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.453084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.463323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.463374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.474953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.474978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.485262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.485290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.500408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.500448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.510018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.510043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.192 [2024-11-19 16:45:34.525101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.192 [2024-11-19 16:45:34.525145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.539555] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.539582] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.549530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.549555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.561243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.561271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.577720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.577748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.587522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.587546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.599775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.599801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.610958] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.610983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.623838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.623863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 11610.50 IOPS, 90.71 MiB/s [2024-11-19T15:45:34.792Z] [2024-11-19 16:45:34.634014] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.634039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.648750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.648788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.658776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.658801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.670470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.670508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.684746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.684773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.694569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.694594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.708809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.453 [2024-11-19 16:45:34.708836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.453 [2024-11-19 16:45:34.718563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.718590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.454 [2024-11-19 16:45:34.732325] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.732365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.454 [2024-11-19 16:45:34.742078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.742113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.454 [2024-11-19 16:45:34.756881] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.756907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.454 [2024-11-19 16:45:34.767871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.767912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.454 [2024-11-19 16:45:34.778960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.454 [2024-11-19 16:45:34.778986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.793241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.793269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.802624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.802664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.816991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.817026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.827190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 
[2024-11-19 16:45:34.827217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.838989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.839015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.850025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.850065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.864425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.864452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.873687] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.873736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.885967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.885991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.902042] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.902076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.917779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.917806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.927266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.927294] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.939214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.939242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.951714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.951741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.961037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.961086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.977244] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.977271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:34.996225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:34.996253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:35.006516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:35.006543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:35.020007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:35.020035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.715 [2024-11-19 16:45:35.029517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:35.029544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:44.715 [2024-11-19 16:45:35.041807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.715 [2024-11-19 16:45:35.041832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.976 [2024-11-19 16:45:35.057284] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.976 [2024-11-19 16:45:35.057312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.066180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.066208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.078039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.078089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.092613] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.092661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.102202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.102230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.113951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.113976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.129805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.129832] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.139689] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.139715] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.151671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.151697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.162575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.162601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.178364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.178389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.193352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.193379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.202988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.203013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.214788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.214812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.229839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.229865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.247626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.247652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.258490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.258515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.273063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.273117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.282624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.282666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:44.977 [2024-11-19 16:45:35.298324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:44.977 [2024-11-19 16:45:35.298350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.312607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.312635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.322413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.322441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.337142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.337181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.346692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 
[2024-11-19 16:45:35.346719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.360907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.360948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.370846] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.370872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.383025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.383064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.394205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.394233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.408496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.408536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.418272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.418300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.433215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.433242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.443527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.443569] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.455308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.455336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.466378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.466404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.482200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.482228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.495923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.495951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.505673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.505702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.520501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.520544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.530101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.530129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.544618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.544645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.237 [2024-11-19 16:45:35.554378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.554405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.237 [2024-11-19 16:45:35.568941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.237 [2024-11-19 16:45:35.568978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.578584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.578610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.593809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.593853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.603781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.603807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.615607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.615634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.626447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.626474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 11604.20 IOPS, 90.66 MiB/s [2024-11-19T15:45:35.837Z] [2024-11-19 16:45:35.635489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.635518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.498 00:39:45.498 Latency(us) 00:39:45.498 [2024-11-19T15:45:35.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:45.498 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:39:45.498 Nvme1n1 : 5.01 11606.63 90.68 0.00 0.00 11014.98 2912.71 18058.81 00:39:45.498 [2024-11-19T15:45:35.837Z] =================================================================================================================== 00:39:45.498 [2024-11-19T15:45:35.837Z] Total : 11606.63 90.68 0.00 0.00 11014.98 2912.71 18058.81 00:39:45.498 [2024-11-19 16:45:35.643492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.643516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.651494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.651519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.659530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.659580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.667533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.667578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.675527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.675573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.683526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.683571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.691520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.691565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.699526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.699567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.707528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.707573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.715533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.715581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.723529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.723576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.731527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.731575] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.739530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.739574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.747527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.747574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:39:45.498 [2024-11-19 16:45:35.755525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.755570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.763533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.498 [2024-11-19 16:45:35.763581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.498 [2024-11-19 16:45:35.771523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.771562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.779516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.779553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.787495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.787520] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.795524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.795567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.803521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.803562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.811524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.811567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.819489] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.819511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.499 [2024-11-19 16:45:35.827481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.499 [2024-11-19 16:45:35.827502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.759 [2024-11-19 16:45:35.835475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:39:45.759 [2024-11-19 16:45:35.835496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:45.759 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (440456) - No such process 00:39:45.759 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 440456 00:39:45.759 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:39:45.759 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:45.760 delay0 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:45.760 16:45:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:39:45.760 [2024-11-19 16:45:35.918745] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:39:53.885 Initializing NVMe Controllers 00:39:53.885 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:39:53.885 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:39:53.885 Initialization complete. Launching workers. 
00:39:53.885 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 212, failed: 26043 00:39:53.885 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26097, failed to submit 158 00:39:53.885 success 26045, unsuccessful 52, failed 0 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:53.885 16:45:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:53.885 rmmod nvme_tcp 00:39:53.885 rmmod nvme_fabrics 00:39:53.885 rmmod nvme_keyring 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 439177 ']' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 439177 ']' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 439177' 00:39:53.885 killing process with pid 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 439177 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:53.885 
16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:53.885 16:45:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:55.268 00:39:55.268 real 0m28.583s 00:39:55.268 user 0m40.267s 00:39:55.268 sys 0m10.131s 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:55.268 ************************************ 00:39:55.268 END TEST nvmf_zcopy 00:39:55.268 ************************************ 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:55.268 
************************************ 00:39:55.268 START TEST nvmf_nmic 00:39:55.268 ************************************ 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:39:55.268 * Looking for test storage... 00:39:55.268 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:39:55.268 16:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:39:55.268 16:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:39:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.268 --rc genhtml_branch_coverage=1 00:39:55.268 --rc genhtml_function_coverage=1 00:39:55.268 --rc genhtml_legend=1 00:39:55.268 --rc geninfo_all_blocks=1 00:39:55.268 --rc geninfo_unexecuted_blocks=1 00:39:55.268 00:39:55.268 ' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:39:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.268 --rc genhtml_branch_coverage=1 00:39:55.268 --rc genhtml_function_coverage=1 00:39:55.268 --rc genhtml_legend=1 00:39:55.268 --rc geninfo_all_blocks=1 00:39:55.268 --rc geninfo_unexecuted_blocks=1 00:39:55.268 00:39:55.268 ' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:39:55.268 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.268 --rc genhtml_branch_coverage=1 00:39:55.268 --rc genhtml_function_coverage=1 00:39:55.268 --rc genhtml_legend=1 00:39:55.268 --rc geninfo_all_blocks=1 00:39:55.268 --rc geninfo_unexecuted_blocks=1 00:39:55.268 00:39:55.268 ' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:39:55.268 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:55.268 --rc genhtml_branch_coverage=1 00:39:55.268 --rc genhtml_function_coverage=1 00:39:55.268 --rc genhtml_legend=1 00:39:55.268 --rc geninfo_all_blocks=1 00:39:55.268 --rc geninfo_unexecuted_blocks=1 00:39:55.268 00:39:55.268 ' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:39:55.268 16:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:55.268 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.269 16:45:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
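[Editor's note] The exported PATH above contains the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes many times because `paths/export.sh` prepends them on every source. A hypothetical dedup pass (illustrative only; not part of `paths/export.sh`) that keeps the first occurrence of each entry:

```shell
#!/usr/bin/env bash
# Collapse repeated PATH entries, preserving first-seen order.
dedup_path() {
    local out= seen=: dir
    local IFS=:
    for dir in $1; do
        case "$seen" in
            *":$dir:"*) ;;                         # duplicate, skip it
            *) out=${out:+$out:}$dir; seen="$seen$dir:" ;;
        esac
    done
    printf '%s\n' "$out"
}
```

The duplication is harmless for lookup (the first hit wins) but inflates every child environment, which is why it is visible verbatim in the xtrace.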
00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:39:55.269 16:45:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.804 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:57.804 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:39:57.804 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:39:57.804 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:57.804 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:57.804 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:39:57.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:57.805 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:39:57.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
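[Editor's note] The discovery loop above maps each E810 PCI function (`0000:0a:00.0`, `0000:0a:00.1`) to its kernel interface by globbing `/sys/bus/pci/devices/$pci/net/*`, yielding `cvl_0_0` and `cvl_0_1`. A self-contained version of that sysfs lookup (the optional second argument, not present in `nvmf/common.sh`, is added here so the function can be exercised against a fake tree):

```shell
#!/usr/bin/env bash
# Map a PCI address to the net interface names under it in sysfs,
# mirroring pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) above.
pci_to_netdevs() {
    local pci=$1 sysfs=${2:-/sys/bus/pci/devices} d
    for d in "$sysfs/$pci/net/"*; do
        [ -e "$d" ] || continue    # glob matched nothing for this device
        printf '%s\n' "${d##*/}"   # strip the sysfs path, keep the ifname
    done
}
```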
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:57.805 16:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:57.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:57.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:39:57.805 00:39:57.805 --- 10.0.0.2 ping statistics --- 00:39:57.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.805 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:57.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:57.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:39:57.805 00:39:57.805 --- 10.0.0.1 ping statistics --- 00:39:57.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:57.805 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=443764 
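[Editor's note] `nvmf_tcp_init` above moves the target-side port `cvl_0_0` into the `cvl_0_0_ns_spdk` namespace, assigns 10.0.0.1/24 (initiator, host side) and 10.0.0.2/24 (target, inside the netns), opens port 4420 via the `ipts` wrapper, and verifies reachability with the two pings. Note that `ipts` expands to `iptables` plus a `SPDK_NVMF:` comment so teardown can later find and delete exactly the rules this run added. A hypothetical dry-run equivalent of that wrapper (builds the command line instead of issuing it):

```shell
#!/usr/bin/env bash
# Render the iptables invocation the ipts helper above would run,
# tagging the rule with the SPDK_NVMF comment used for cleanup.
ipts_cmd() {
    printf 'iptables %s -m comment --comment "SPDK_NVMF:%s"\n' "$*" "$*"
}
```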
00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 443764 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 443764 ']' 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:57.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:57.805 16:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.805 [2024-11-19 16:45:47.863381] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:57.805 [2024-11-19 16:45:47.864487] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:39:57.805 [2024-11-19 16:45:47.864554] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:57.805 [2024-11-19 16:45:47.936885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:57.805 [2024-11-19 16:45:47.984524] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:57.805 [2024-11-19 16:45:47.984578] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:57.805 [2024-11-19 16:45:47.984606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:57.805 [2024-11-19 16:45:47.984617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:57.805 [2024-11-19 16:45:47.984629] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:57.805 [2024-11-19 16:45:47.986185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:39:57.806 [2024-11-19 16:45:47.986246] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:39:57.806 [2024-11-19 16:45:47.986314] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:39:57.806 [2024-11-19 16:45:47.986317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:39:57.806 [2024-11-19 16:45:48.069237] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:57.806 [2024-11-19 16:45:48.069432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:57.806 [2024-11-19 16:45:48.069792] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:57.806 [2024-11-19 16:45:48.070345] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:57.806 [2024-11-19 16:45:48.070587] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:57.806 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:57.806 [2024-11-19 16:45:48.122988] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.065 Malloc0 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.065 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.066 [2024-11-19 16:45:48.183195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:39:58.066 test case1: single bdev can't be used in multiple subsystems 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.066 [2024-11-19 16:45:48.206916] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 
already claimed: type exclusive_write by module NVMe-oF Target 00:39:58.066 [2024-11-19 16:45:48.206962] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:39:58.066 [2024-11-19 16:45:48.206985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:39:58.066 request: 00:39:58.066 { 00:39:58.066 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:39:58.066 "namespace": { 00:39:58.066 "bdev_name": "Malloc0", 00:39:58.066 "no_auto_visible": false 00:39:58.066 }, 00:39:58.066 "method": "nvmf_subsystem_add_ns", 00:39:58.066 "req_id": 1 00:39:58.066 } 00:39:58.066 Got JSON-RPC error response 00:39:58.066 response: 00:39:58.066 { 00:39:58.066 "code": -32602, 00:39:58.066 "message": "Invalid parameters" 00:39:58.066 } 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:39:58.066 Adding namespace failed - expected result. 
00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:39:58.066 test case2: host connect to nvmf target in multiple paths 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:39:58.066 [2024-11-19 16:45:48.214995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:39:58.066 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:39:58.324 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:39:58.325 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:39:58.325 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:39:58.325 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:39:58.325 16:45:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:00.858 16:45:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:00.858 [global] 00:40:00.858 thread=1 00:40:00.858 invalidate=1 00:40:00.858 rw=write 00:40:00.858 time_based=1 00:40:00.858 runtime=1 00:40:00.858 ioengine=libaio 00:40:00.858 direct=1 00:40:00.858 bs=4096 00:40:00.858 iodepth=1 00:40:00.858 norandommap=0 00:40:00.858 numjobs=1 00:40:00.858 00:40:00.858 verify_dump=1 00:40:00.858 verify_backlog=512 00:40:00.858 verify_state_save=0 00:40:00.858 do_verify=1 00:40:00.858 verify=crc32c-intel 00:40:00.858 [job0] 00:40:00.858 filename=/dev/nvme0n1 00:40:00.858 Could not set queue depth (nvme0n1) 00:40:00.858 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:00.858 fio-3.35 00:40:00.858 Starting 1 thread 00:40:01.797 00:40:01.797 job0: (groupid=0, jobs=1): err= 0: pid=444247: Tue Nov 19 
16:45:51 2024 00:40:01.797 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:01.797 slat (nsec): min=5723, max=56736, avg=10944.78, stdev=5006.00 00:40:01.797 clat (usec): min=210, max=1233, avg=234.24, stdev=34.55 00:40:01.797 lat (usec): min=217, max=1240, avg=245.19, stdev=36.31 00:40:01.797 clat percentiles (usec): 00:40:01.797 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 217], 20.00th=[ 221], 00:40:01.797 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 233], 00:40:01.797 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 262], 00:40:01.797 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 347], 99.95th=[ 1188], 00:40:01.797 | 99.99th=[ 1237] 00:40:01.797 write: IOPS=2494, BW=9978KiB/s (10.2MB/s)(9988KiB/1001msec); 0 zone resets 00:40:01.797 slat (usec): min=7, max=30664, avg=26.78, stdev=613.40 00:40:01.797 clat (usec): min=141, max=386, avg=165.99, stdev=16.90 00:40:01.797 lat (usec): min=149, max=30872, avg=192.77, stdev=614.61 00:40:01.797 clat percentiles (usec): 00:40:01.797 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 151], 00:40:01.797 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 169], 00:40:01.797 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 188], 95.00th=[ 194], 00:40:01.797 | 99.00th=[ 206], 99.50th=[ 210], 99.90th=[ 281], 99.95th=[ 302], 00:40:01.797 | 99.99th=[ 388] 00:40:01.797 bw ( KiB/s): min=10496, max=10496, per=100.00%, avg=10496.00, stdev= 0.00, samples=1 00:40:01.797 iops : min= 2624, max= 2624, avg=2624.00, stdev= 0.00, samples=1 00:40:01.797 lat (usec) : 250=90.85%, 500=9.11% 00:40:01.797 lat (msec) : 2=0.04% 00:40:01.797 cpu : usr=3.70%, sys=8.50%, ctx=4548, majf=0, minf=1 00:40:01.797 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:01.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:01.797 issued rwts: total=2048,2497,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:40:01.797 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:01.797 00:40:01.797 Run status group 0 (all jobs): 00:40:01.797 READ: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:40:01.797 WRITE: bw=9978KiB/s (10.2MB/s), 9978KiB/s-9978KiB/s (10.2MB/s-10.2MB/s), io=9988KiB (10.2MB), run=1001-1001msec 00:40:01.797 00:40:01.797 Disk stats (read/write): 00:40:01.797 nvme0n1: ios=2039/2048, merge=0/0, ticks=1396/326, in_queue=1722, util=98.80% 00:40:01.797 16:45:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:02.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:02.056 16:45:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:02.056 rmmod nvme_tcp 00:40:02.056 rmmod nvme_fabrics 00:40:02.056 rmmod nvme_keyring 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 443764 ']' 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 443764 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 443764 ']' 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 443764 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 443764 00:40:02.056 
16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 443764' 00:40:02.056 killing process with pid 443764 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 443764 00:40:02.056 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 443764 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:02.316 16:45:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:04.223 00:40:04.223 real 0m9.150s 00:40:04.223 user 0m17.080s 00:40:04.223 sys 0m3.442s 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:04.223 ************************************ 00:40:04.223 END TEST nvmf_nmic 00:40:04.223 ************************************ 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:04.223 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:04.483 ************************************ 00:40:04.483 START TEST nvmf_fio_target 00:40:04.483 ************************************ 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:04.483 * Looking for test storage... 
00:40:04.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:04.483 
16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.483 --rc genhtml_branch_coverage=1 00:40:04.483 --rc genhtml_function_coverage=1 00:40:04.483 --rc genhtml_legend=1 00:40:04.483 --rc geninfo_all_blocks=1 00:40:04.483 --rc geninfo_unexecuted_blocks=1 00:40:04.483 00:40:04.483 ' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.483 --rc genhtml_branch_coverage=1 00:40:04.483 --rc genhtml_function_coverage=1 00:40:04.483 --rc genhtml_legend=1 00:40:04.483 --rc geninfo_all_blocks=1 00:40:04.483 --rc geninfo_unexecuted_blocks=1 00:40:04.483 00:40:04.483 ' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.483 --rc genhtml_branch_coverage=1 00:40:04.483 --rc genhtml_function_coverage=1 00:40:04.483 --rc genhtml_legend=1 00:40:04.483 --rc geninfo_all_blocks=1 00:40:04.483 --rc geninfo_unexecuted_blocks=1 00:40:04.483 00:40:04.483 ' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:04.483 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:04.483 --rc genhtml_branch_coverage=1 00:40:04.483 --rc genhtml_function_coverage=1 00:40:04.483 --rc genhtml_legend=1 00:40:04.483 --rc geninfo_all_blocks=1 
00:40:04.483 --rc geninfo_unexecuted_blocks=1 00:40:04.483 00:40:04.483 ' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:04.483 
16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:04.483 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.483 16:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:04.484 
16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:04.484 16:45:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:04.484 16:45:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:07.020 16:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:07.020 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:07.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:07.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:07.021 
16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:07.021 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:07.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:07.021 16:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:07.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:07.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:40:07.021 00:40:07.021 --- 10.0.0.2 ping statistics --- 00:40:07.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.021 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:07.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:07.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:40:07.021 00:40:07.021 --- 10.0.0.1 ping statistics --- 00:40:07.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:07.021 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:07.021 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:07.022 16:45:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=446295 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 446295 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 446295 ']' 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:07.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:07.022 16:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:07.022 [2024-11-19 16:45:57.030652] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:07.022 [2024-11-19 16:45:57.031735] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:40:07.022 [2024-11-19 16:45:57.031804] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:07.022 [2024-11-19 16:45:57.105246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:07.022 [2024-11-19 16:45:57.152956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:07.022 [2024-11-19 16:45:57.153008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:07.022 [2024-11-19 16:45:57.153032] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:07.022 [2024-11-19 16:45:57.153043] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:07.022 [2024-11-19 16:45:57.153053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:07.022 [2024-11-19 16:45:57.154612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:07.022 [2024-11-19 16:45:57.154638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:07.022 [2024-11-19 16:45:57.154694] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:07.022 [2024-11-19 16:45:57.154697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:07.022 [2024-11-19 16:45:57.238205] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:07.022 [2024-11-19 16:45:57.238432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:07.022 [2024-11-19 16:45:57.238706] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:07.022 [2024-11-19 16:45:57.239263] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:07.022 [2024-11-19 16:45:57.239524] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:07.022 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:07.280 [2024-11-19 16:45:57.543425] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:07.280 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:07.849 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:07.849 16:45:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 
00:40:08.109 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:08.109 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:08.367 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:08.367 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:08.626 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:08.626 16:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:08.886 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:09.144 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:09.144 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:09.401 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:09.401 16:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:09.967 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:09.967 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:10.225 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:10.484 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:10.484 16:46:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:10.742 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:10.742 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:11.307 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:11.566 [2024-11-19 16:46:01.667600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:11.566 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:11.825 16:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:12.084 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:12.343 16:46:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:14.251 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:14.252 16:46:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:14.252 [global] 00:40:14.252 thread=1 00:40:14.252 invalidate=1 00:40:14.252 rw=write 00:40:14.252 time_based=1 00:40:14.252 runtime=1 00:40:14.252 ioengine=libaio 00:40:14.252 direct=1 00:40:14.252 bs=4096 00:40:14.252 iodepth=1 00:40:14.252 norandommap=0 00:40:14.252 numjobs=1 00:40:14.252 00:40:14.252 verify_dump=1 00:40:14.252 verify_backlog=512 00:40:14.252 verify_state_save=0 00:40:14.252 do_verify=1 00:40:14.252 verify=crc32c-intel 00:40:14.252 [job0] 00:40:14.252 filename=/dev/nvme0n1 00:40:14.252 [job1] 00:40:14.252 filename=/dev/nvme0n2 00:40:14.252 [job2] 00:40:14.252 filename=/dev/nvme0n3 00:40:14.252 [job3] 00:40:14.252 filename=/dev/nvme0n4 00:40:14.252 Could not set queue depth (nvme0n1) 00:40:14.252 Could not set queue depth (nvme0n2) 00:40:14.252 Could not set queue depth (nvme0n3) 00:40:14.252 Could not set queue depth (nvme0n4) 00:40:14.510 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:14.510 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:14.510 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:14.510 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:14.510 fio-3.35 00:40:14.510 Starting 4 threads 00:40:15.887 00:40:15.887 job0: (groupid=0, jobs=1): err= 0: pid=447328: Tue Nov 19 16:46:06 2024 00:40:15.887 read: IOPS=253, BW=1013KiB/s (1037kB/s)(1032KiB/1019msec) 00:40:15.887 slat (nsec): min=8169, max=62145, avg=19447.50, stdev=6076.51 00:40:15.887 clat (usec): min=253, max=41381, avg=3525.96, stdev=10946.25 00:40:15.887 lat (usec): min=272, 
max=41389, avg=3545.41, stdev=10945.68 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 260], 5.00th=[ 265], 10.00th=[ 269], 20.00th=[ 273], 00:40:15.887 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 285], 00:40:15.887 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[40633], 00:40:15.887 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:15.887 | 99.99th=[41157] 00:40:15.887 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:40:15.887 slat (usec): min=8, max=728, avg=12.41, stdev=42.54 00:40:15.887 clat (usec): min=159, max=246, avg=176.64, stdev= 9.72 00:40:15.887 lat (usec): min=167, max=912, avg=189.05, stdev=44.06 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 165], 20.00th=[ 169], 00:40:15.887 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 178], 00:40:15.887 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 194], 00:40:15.887 | 99.00th=[ 202], 99.50th=[ 206], 99.90th=[ 247], 99.95th=[ 247], 00:40:15.887 | 99.99th=[ 247] 00:40:15.887 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:40:15.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:15.887 lat (usec) : 250=66.49%, 500=30.78% 00:40:15.887 lat (msec) : 50=2.73% 00:40:15.887 cpu : usr=0.59%, sys=1.38%, ctx=774, majf=0, minf=1 00:40:15.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:15.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 issued rwts: total=258,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:15.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:15.887 job1: (groupid=0, jobs=1): err= 0: pid=447329: Tue Nov 19 16:46:06 2024 00:40:15.887 read: IOPS=21, BW=86.4KiB/s (88.4kB/s)(88.0KiB/1019msec) 00:40:15.887 slat 
(nsec): min=7329, max=34842, avg=26013.95, stdev=10165.01 00:40:15.887 clat (usec): min=28548, max=41052, avg=40385.43, stdev=2645.37 00:40:15.887 lat (usec): min=28581, max=41067, avg=40411.44, stdev=2643.77 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[28443], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:15.887 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:15.887 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:15.887 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:15.887 | 99.99th=[41157] 00:40:15.887 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:40:15.887 slat (nsec): min=7747, max=41725, avg=8964.71, stdev=1979.89 00:40:15.887 clat (usec): min=150, max=254, avg=241.27, stdev=12.27 00:40:15.887 lat (usec): min=164, max=290, avg=250.23, stdev=11.95 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 165], 5.00th=[ 237], 10.00th=[ 241], 20.00th=[ 241], 00:40:15.887 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 245], 00:40:15.887 | 70.00th=[ 245], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 247], 00:40:15.887 | 99.00th=[ 249], 99.50th=[ 251], 99.90th=[ 255], 99.95th=[ 255], 00:40:15.887 | 99.99th=[ 255] 00:40:15.887 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:40:15.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:15.887 lat (usec) : 250=94.94%, 500=0.94% 00:40:15.887 lat (msec) : 50=4.12% 00:40:15.887 cpu : usr=0.39%, sys=0.59%, ctx=534, majf=0, minf=2 00:40:15.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:15.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:15.887 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:40:15.887 job2: (groupid=0, jobs=1): err= 0: pid=447330: Tue Nov 19 16:46:06 2024 00:40:15.887 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:40:15.887 slat (nsec): min=7419, max=36064, avg=27012.10, stdev=10698.75 00:40:15.887 clat (usec): min=40468, max=42053, avg=41700.08, stdev=495.80 00:40:15.887 lat (usec): min=40476, max=42066, avg=41727.09, stdev=500.80 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:40:15.887 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:15.887 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:15.887 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:15.887 | 99.99th=[42206] 00:40:15.887 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:40:15.887 slat (nsec): min=7802, max=40059, avg=9438.56, stdev=2477.59 00:40:15.887 clat (usec): min=177, max=404, avg=254.65, stdev=31.19 00:40:15.887 lat (usec): min=188, max=413, avg=264.09, stdev=31.05 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 188], 5.00th=[ 198], 10.00th=[ 217], 20.00th=[ 239], 00:40:15.887 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:40:15.887 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:40:15.887 | 99.00th=[ 396], 99.50th=[ 400], 99.90th=[ 404], 99.95th=[ 404], 00:40:15.887 | 99.99th=[ 404] 00:40:15.887 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:40:15.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:15.887 lat (usec) : 250=37.52%, 500=58.54% 00:40:15.887 lat (msec) : 50=3.94% 00:40:15.887 cpu : usr=0.40%, sys=0.59%, ctx=533, majf=0, minf=2 00:40:15.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:15.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:15.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:15.887 job3: (groupid=0, jobs=1): err= 0: pid=447331: Tue Nov 19 16:46:06 2024 00:40:15.887 read: IOPS=184, BW=739KiB/s (757kB/s)(748KiB/1012msec) 00:40:15.887 slat (nsec): min=7182, max=54697, avg=13384.36, stdev=8214.23 00:40:15.887 clat (usec): min=220, max=41558, avg=4635.64, stdev=12695.89 00:40:15.887 lat (usec): min=229, max=41582, avg=4649.02, stdev=12698.92 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 221], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 227], 00:40:15.887 | 30.00th=[ 229], 40.00th=[ 231], 50.00th=[ 233], 60.00th=[ 235], 00:40:15.887 | 70.00th=[ 243], 80.00th=[ 359], 90.00th=[40633], 95.00th=[41157], 00:40:15.887 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:40:15.887 | 99.99th=[41681] 00:40:15.887 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:40:15.887 slat (nsec): min=7787, max=23422, avg=9728.60, stdev=1862.42 00:40:15.887 clat (usec): min=153, max=448, avg=257.75, stdev=34.33 00:40:15.887 lat (usec): min=161, max=461, avg=267.48, stdev=34.83 00:40:15.887 clat percentiles (usec): 00:40:15.887 | 1.00th=[ 157], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 241], 00:40:15.887 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 258], 60.00th=[ 262], 00:40:15.887 | 70.00th=[ 269], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:40:15.887 | 99.00th=[ 404], 99.50th=[ 408], 99.90th=[ 449], 99.95th=[ 449], 00:40:15.887 | 99.99th=[ 449] 00:40:15.887 bw ( KiB/s): min= 4096, max= 4096, per=50.95%, avg=4096.00, stdev= 0.00, samples=1 00:40:15.887 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:15.887 lat (usec) : 250=47.78%, 500=49.07%, 750=0.29% 00:40:15.887 lat (msec) : 50=2.86% 00:40:15.887 cpu : usr=0.59%, sys=0.89%, ctx=701, majf=0, minf=1 
00:40:15.887 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:15.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:15.887 issued rwts: total=187,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:15.887 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:15.887 00:40:15.887 Run status group 0 (all jobs): 00:40:15.887 READ: bw=1916KiB/s (1962kB/s), 82.9KiB/s-1013KiB/s (84.9kB/s-1037kB/s), io=1952KiB (1999kB), run=1012-1019msec 00:40:15.887 WRITE: bw=8039KiB/s (8232kB/s), 2010KiB/s-2024KiB/s (2058kB/s-2072kB/s), io=8192KiB (8389kB), run=1012-1019msec 00:40:15.887 00:40:15.887 Disk stats (read/write): 00:40:15.887 nvme0n1: ios=313/512, merge=0/0, ticks=939/84, in_queue=1023, util=97.19% 00:40:15.887 nvme0n2: ios=16/512, merge=0/0, ticks=656/121, in_queue=777, util=83.32% 00:40:15.887 nvme0n3: ios=16/512, merge=0/0, ticks=667/128, in_queue=795, util=87.74% 00:40:15.887 nvme0n4: ios=235/512, merge=0/0, ticks=1180/124, in_queue=1304, util=97.47% 00:40:15.887 16:46:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:15.887 [global] 00:40:15.887 thread=1 00:40:15.887 invalidate=1 00:40:15.887 rw=randwrite 00:40:15.887 time_based=1 00:40:15.887 runtime=1 00:40:15.887 ioengine=libaio 00:40:15.887 direct=1 00:40:15.887 bs=4096 00:40:15.887 iodepth=1 00:40:15.887 norandommap=0 00:40:15.887 numjobs=1 00:40:15.887 00:40:15.887 verify_dump=1 00:40:15.887 verify_backlog=512 00:40:15.887 verify_state_save=0 00:40:15.887 do_verify=1 00:40:15.887 verify=crc32c-intel 00:40:15.887 [job0] 00:40:15.887 filename=/dev/nvme0n1 00:40:15.887 [job1] 00:40:15.887 filename=/dev/nvme0n2 00:40:15.887 [job2] 00:40:15.887 filename=/dev/nvme0n3 00:40:15.887 [job3] 00:40:15.887 filename=/dev/nvme0n4 
00:40:15.887 Could not set queue depth (nvme0n1) 00:40:15.887 Could not set queue depth (nvme0n2) 00:40:15.887 Could not set queue depth (nvme0n3) 00:40:15.887 Could not set queue depth (nvme0n4) 00:40:16.144 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.144 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.144 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.144 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:16.144 fio-3.35 00:40:16.144 Starting 4 threads 00:40:17.522 00:40:17.522 job0: (groupid=0, jobs=1): err= 0: pid=447550: Tue Nov 19 16:46:07 2024 00:40:17.522 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:40:17.522 slat (nsec): min=4435, max=46963, avg=6754.70, stdev=3758.24 00:40:17.522 clat (usec): min=212, max=563, avg=259.84, stdev=47.59 00:40:17.522 lat (usec): min=217, max=568, avg=266.59, stdev=49.37 00:40:17.522 clat percentiles (usec): 00:40:17.522 | 1.00th=[ 215], 5.00th=[ 221], 10.00th=[ 223], 20.00th=[ 227], 00:40:17.522 | 30.00th=[ 231], 40.00th=[ 241], 50.00th=[ 247], 60.00th=[ 251], 00:40:17.522 | 70.00th=[ 258], 80.00th=[ 277], 90.00th=[ 330], 95.00th=[ 388], 00:40:17.522 | 99.00th=[ 412], 99.50th=[ 420], 99.90th=[ 474], 99.95th=[ 482], 00:40:17.522 | 99.99th=[ 562] 00:40:17.522 write: IOPS=2452, BW=9810KiB/s (10.0MB/s)(9820KiB/1001msec); 0 zone resets 00:40:17.522 slat (nsec): min=5862, max=43463, avg=7469.27, stdev=2736.78 00:40:17.522 clat (usec): min=148, max=408, avg=173.65, stdev=20.88 00:40:17.522 lat (usec): min=154, max=416, avg=181.12, stdev=21.25 00:40:17.522 clat percentiles (usec): 00:40:17.522 | 1.00th=[ 153], 5.00th=[ 155], 10.00th=[ 155], 20.00th=[ 157], 00:40:17.522 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 165], 60.00th=[ 172], 00:40:17.522 
| 70.00th=[ 178], 80.00th=[ 190], 90.00th=[ 206], 95.00th=[ 219], 00:40:17.522 | 99.00th=[ 237], 99.50th=[ 241], 99.90th=[ 251], 99.95th=[ 253], 00:40:17.522 | 99.99th=[ 408] 00:40:17.522 bw ( KiB/s): min= 8528, max= 8528, per=55.24%, avg=8528.00, stdev= 0.00, samples=1 00:40:17.522 iops : min= 2132, max= 2132, avg=2132.00, stdev= 0.00, samples=1 00:40:17.522 lat (usec) : 250=80.55%, 500=19.43%, 750=0.02% 00:40:17.522 cpu : usr=2.20%, sys=2.80%, ctx=4503, majf=0, minf=1 00:40:17.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.522 issued rwts: total=2048,2455,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.522 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.522 job1: (groupid=0, jobs=1): err= 0: pid=447551: Tue Nov 19 16:46:07 2024 00:40:17.522 read: IOPS=21, BW=85.1KiB/s (87.1kB/s)(88.0KiB/1034msec) 00:40:17.522 slat (nsec): min=8693, max=18677, avg=14096.55, stdev=1752.71 00:40:17.522 clat (usec): min=315, max=42048, avg=38047.97, stdev=12175.45 00:40:17.522 lat (usec): min=332, max=42062, avg=38062.07, stdev=12175.24 00:40:17.522 clat percentiles (usec): 00:40:17.522 | 1.00th=[ 318], 5.00th=[ 578], 10.00th=[40633], 20.00th=[41157], 00:40:17.522 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:17.522 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:17.522 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:17.522 | 99.99th=[42206] 00:40:17.522 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:40:17.522 slat (nsec): min=6093, max=48477, avg=13746.08, stdev=6320.30 00:40:17.522 clat (usec): min=155, max=698, avg=364.48, stdev=127.14 00:40:17.522 lat (usec): min=162, max=728, avg=378.22, stdev=130.56 00:40:17.522 clat percentiles (usec): 
00:40:17.522 | 1.00th=[ 161], 5.00th=[ 176], 10.00th=[ 184], 20.00th=[ 198], 00:40:17.522 | 30.00th=[ 231], 40.00th=[ 388], 50.00th=[ 408], 60.00th=[ 420], 00:40:17.522 | 70.00th=[ 445], 80.00th=[ 469], 90.00th=[ 506], 95.00th=[ 545], 00:40:17.522 | 99.00th=[ 586], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 701], 00:40:17.522 | 99.99th=[ 701] 00:40:17.522 bw ( KiB/s): min= 4096, max= 4096, per=26.53%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.522 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.522 lat (usec) : 250=29.78%, 500=55.62%, 750=10.86% 00:40:17.522 lat (msec) : 50=3.75% 00:40:17.522 cpu : usr=0.58%, sys=0.68%, ctx=536, majf=0, minf=1 00:40:17.522 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.522 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.522 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.523 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.523 job2: (groupid=0, jobs=1): err= 0: pid=447552: Tue Nov 19 16:46:07 2024 00:40:17.523 read: IOPS=18, BW=75.5KiB/s (77.3kB/s)(76.0KiB/1007msec) 00:40:17.523 slat (nsec): min=9815, max=13893, avg=11441.11, stdev=1339.39 00:40:17.523 clat (usec): min=40948, max=42018, avg=41911.31, stdev=254.31 00:40:17.523 lat (usec): min=40958, max=42029, avg=41922.75, stdev=254.59 00:40:17.523 clat percentiles (usec): 00:40:17.523 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[42206], 00:40:17.523 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:40:17.523 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:17.523 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:17.523 | 99.99th=[42206] 00:40:17.523 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:40:17.523 slat (nsec): min=9464, max=37198, 
avg=11111.44, stdev=2099.73 00:40:17.523 clat (usec): min=179, max=1074, avg=394.83, stdev=76.10 00:40:17.523 lat (usec): min=189, max=1084, avg=405.94, stdev=75.97 00:40:17.523 clat percentiles (usec): 00:40:17.523 | 1.00th=[ 225], 5.00th=[ 265], 10.00th=[ 289], 20.00th=[ 326], 00:40:17.523 | 30.00th=[ 355], 40.00th=[ 400], 50.00th=[ 420], 60.00th=[ 424], 00:40:17.523 | 70.00th=[ 437], 80.00th=[ 445], 90.00th=[ 461], 95.00th=[ 478], 00:40:17.523 | 99.00th=[ 510], 99.50th=[ 545], 99.90th=[ 1074], 99.95th=[ 1074], 00:40:17.523 | 99.99th=[ 1074] 00:40:17.523 bw ( KiB/s): min= 4096, max= 4096, per=26.53%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.523 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.523 lat (usec) : 250=2.64%, 500=92.09%, 750=1.32%, 1000=0.19% 00:40:17.523 lat (msec) : 2=0.19%, 50=3.58% 00:40:17.523 cpu : usr=0.50%, sys=0.70%, ctx=533, majf=0, minf=1 00:40:17.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.523 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.523 job3: (groupid=0, jobs=1): err= 0: pid=447553: Tue Nov 19 16:46:07 2024 00:40:17.523 read: IOPS=24, BW=98.3KiB/s (101kB/s)(100KiB/1017msec) 00:40:17.523 slat (nsec): min=8358, max=13932, avg=13213.80, stdev=1049.78 00:40:17.523 clat (usec): min=298, max=42209, avg=31918.68, stdev=18087.75 00:40:17.523 lat (usec): min=311, max=42222, avg=31931.89, stdev=18087.61 00:40:17.523 clat percentiles (usec): 00:40:17.523 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 412], 20.00th=[ 437], 00:40:17.523 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:40:17.523 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:40:17.523 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:17.523 | 99.99th=[42206] 00:40:17.523 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:40:17.523 slat (nsec): min=6976, max=38776, avg=14751.05, stdev=4233.35 00:40:17.523 clat (usec): min=200, max=647, avg=408.74, stdev=76.58 00:40:17.523 lat (usec): min=210, max=662, avg=423.49, stdev=76.69 00:40:17.523 clat percentiles (usec): 00:40:17.523 | 1.00th=[ 225], 5.00th=[ 262], 10.00th=[ 310], 20.00th=[ 351], 00:40:17.523 | 30.00th=[ 383], 40.00th=[ 400], 50.00th=[ 416], 60.00th=[ 429], 00:40:17.523 | 70.00th=[ 441], 80.00th=[ 461], 90.00th=[ 494], 95.00th=[ 537], 00:40:17.523 | 99.00th=[ 603], 99.50th=[ 611], 99.90th=[ 652], 99.95th=[ 652], 00:40:17.523 | 99.99th=[ 652] 00:40:17.523 bw ( KiB/s): min= 4096, max= 4096, per=26.53%, avg=4096.00, stdev= 0.00, samples=1 00:40:17.523 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:17.523 lat (usec) : 250=2.98%, 500=85.10%, 750=8.38% 00:40:17.523 lat (msec) : 50=3.54% 00:40:17.523 cpu : usr=0.30%, sys=0.79%, ctx=537, majf=0, minf=1 00:40:17.523 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:17.523 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.523 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:17.523 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:17.523 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:17.523 00:40:17.523 Run status group 0 (all jobs): 00:40:17.523 READ: bw=8178KiB/s (8374kB/s), 75.5KiB/s-8184KiB/s (77.3kB/s-8380kB/s), io=8456KiB (8659kB), run=1001-1034msec 00:40:17.523 WRITE: bw=15.1MiB/s (15.8MB/s), 1981KiB/s-9810KiB/s (2028kB/s-10.0MB/s), io=15.6MiB (16.3MB), run=1001-1034msec 00:40:17.523 00:40:17.523 Disk stats (read/write): 00:40:17.523 nvme0n1: ios=1803/2048, merge=0/0, ticks=474/338, in_queue=812, util=86.87% 00:40:17.523 nvme0n2: ios=41/512, merge=0/0, 
ticks=1612/183, in_queue=1795, util=99.80% 00:40:17.523 nvme0n3: ios=38/512, merge=0/0, ticks=1574/200, in_queue=1774, util=99.58% 00:40:17.523 nvme0n4: ios=19/512, merge=0/0, ticks=630/203, in_queue=833, util=89.59% 00:40:17.523 16:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:17.523 [global] 00:40:17.523 thread=1 00:40:17.523 invalidate=1 00:40:17.523 rw=write 00:40:17.523 time_based=1 00:40:17.523 runtime=1 00:40:17.523 ioengine=libaio 00:40:17.523 direct=1 00:40:17.523 bs=4096 00:40:17.523 iodepth=128 00:40:17.523 norandommap=0 00:40:17.523 numjobs=1 00:40:17.523 00:40:17.523 verify_dump=1 00:40:17.523 verify_backlog=512 00:40:17.523 verify_state_save=0 00:40:17.523 do_verify=1 00:40:17.523 verify=crc32c-intel 00:40:17.523 [job0] 00:40:17.523 filename=/dev/nvme0n1 00:40:17.523 [job1] 00:40:17.523 filename=/dev/nvme0n2 00:40:17.523 [job2] 00:40:17.523 filename=/dev/nvme0n3 00:40:17.523 [job3] 00:40:17.523 filename=/dev/nvme0n4 00:40:17.523 Could not set queue depth (nvme0n1) 00:40:17.523 Could not set queue depth (nvme0n2) 00:40:17.523 Could not set queue depth (nvme0n3) 00:40:17.523 Could not set queue depth (nvme0n4) 00:40:17.523 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:17.523 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:17.523 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:17.523 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:17.523 fio-3.35 00:40:17.523 Starting 4 threads 00:40:18.900 00:40:18.900 job0: (groupid=0, jobs=1): err= 0: pid=447894: Tue Nov 19 16:46:08 2024 00:40:18.900 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 
00:40:18.900 slat (usec): min=2, max=21532, avg=100.82, stdev=683.91 00:40:18.900 clat (usec): min=7337, max=44557, avg=12923.74, stdev=4055.02 00:40:18.900 lat (usec): min=7340, max=44572, avg=13024.56, stdev=4114.97 00:40:18.900 clat percentiles (usec): 00:40:18.900 | 1.00th=[ 8455], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[10814], 00:40:18.900 | 30.00th=[11207], 40.00th=[11469], 50.00th=[11863], 60.00th=[12125], 00:40:18.900 | 70.00th=[12518], 80.00th=[13566], 90.00th=[15270], 95.00th=[23200], 00:40:18.900 | 99.00th=[30540], 99.50th=[33424], 99.90th=[36963], 99.95th=[36963], 00:40:18.900 | 99.99th=[44303] 00:40:18.900 write: IOPS=4928, BW=19.2MiB/s (20.2MB/s)(19.3MiB/1003msec); 0 zone resets 00:40:18.900 slat (usec): min=3, max=20786, avg=95.63, stdev=702.54 00:40:18.900 clat (usec): min=387, max=42985, avg=13608.45, stdev=5741.58 00:40:18.901 lat (usec): min=393, max=43059, avg=13704.09, stdev=5760.53 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 5735], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[10552], 00:40:18.901 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:40:18.901 | 70.00th=[12780], 80.00th=[14877], 90.00th=[21627], 95.00th=[26084], 00:40:18.901 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:40:18.901 | 99.99th=[42730] 00:40:18.901 bw ( KiB/s): min=18040, max=20480, per=29.56%, avg=19260.00, stdev=1725.34, samples=2 00:40:18.901 iops : min= 4510, max= 5120, avg=4815.00, stdev=431.34, samples=2 00:40:18.901 lat (usec) : 500=0.03% 00:40:18.901 lat (msec) : 2=0.01%, 4=0.07%, 10=10.15%, 20=80.17%, 50=9.57% 00:40:18.901 cpu : usr=6.49%, sys=7.29%, ctx=398, majf=0, minf=1 00:40:18.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:18.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:18.901 issued rwts: total=4608,4943,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:40:18.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:18.901 job1: (groupid=0, jobs=1): err= 0: pid=447895: Tue Nov 19 16:46:08 2024 00:40:18.901 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:40:18.901 slat (usec): min=3, max=14280, avg=118.29, stdev=823.52 00:40:18.901 clat (usec): min=5417, max=63312, avg=15887.30, stdev=7059.02 00:40:18.901 lat (usec): min=5434, max=63325, avg=16005.59, stdev=7099.16 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[11076], 00:40:18.901 | 30.00th=[11600], 40.00th=[12649], 50.00th=[14222], 60.00th=[15795], 00:40:18.901 | 70.00th=[16909], 80.00th=[19268], 90.00th=[23200], 95.00th=[25297], 00:40:18.901 | 99.00th=[52167], 99.50th=[59507], 99.90th=[62129], 99.95th=[63177], 00:40:18.901 | 99.99th=[63177] 00:40:18.901 write: IOPS=3415, BW=13.3MiB/s (14.0MB/s)(13.5MiB/1009msec); 0 zone resets 00:40:18.901 slat (usec): min=4, max=18044, avg=168.17, stdev=937.87 00:40:18.901 clat (usec): min=1490, max=93611, avg=22956.25, stdev=17449.23 00:40:18.901 lat (usec): min=1502, max=93620, avg=23124.41, stdev=17560.05 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 5735], 5.00th=[ 9503], 10.00th=[10290], 20.00th=[11469], 00:40:18.901 | 30.00th=[12125], 40.00th=[12387], 50.00th=[16909], 60.00th=[21103], 00:40:18.901 | 70.00th=[24249], 80.00th=[25560], 90.00th=[53740], 95.00th=[63177], 00:40:18.901 | 99.00th=[83362], 99.50th=[91751], 99.90th=[93848], 99.95th=[93848], 00:40:18.901 | 99.99th=[93848] 00:40:18.901 bw ( KiB/s): min=12536, max=14016, per=20.37%, avg=13276.00, stdev=1046.52, samples=2 00:40:18.901 iops : min= 3134, max= 3504, avg=3319.00, stdev=261.63, samples=2 00:40:18.901 lat (msec) : 2=0.05%, 4=0.09%, 10=6.63%, 20=61.98%, 50=24.73% 00:40:18.901 lat (msec) : 100=6.52% 00:40:18.901 cpu : usr=3.37%, sys=7.44%, ctx=356, majf=0, minf=1 00:40:18.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, 
>=64=99.0% 00:40:18.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:18.901 issued rwts: total=3072,3446,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:18.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:18.901 job2: (groupid=0, jobs=1): err= 0: pid=447896: Tue Nov 19 16:46:08 2024 00:40:18.901 read: IOPS=3315, BW=12.9MiB/s (13.6MB/s)(13.0MiB/1003msec) 00:40:18.901 slat (usec): min=2, max=18253, avg=108.17, stdev=933.62 00:40:18.901 clat (usec): min=702, max=55561, avg=16016.49, stdev=8294.22 00:40:18.901 lat (usec): min=716, max=55573, avg=16124.66, stdev=8372.50 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 2212], 5.00th=[ 6390], 10.00th=[ 9110], 20.00th=[11207], 00:40:18.901 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13698], 60.00th=[15139], 00:40:18.901 | 70.00th=[15664], 80.00th=[19006], 90.00th=[27657], 95.00th=[37487], 00:40:18.901 | 99.00th=[44303], 99.50th=[45351], 99.90th=[46400], 99.95th=[47449], 00:40:18.901 | 99.99th=[55313] 00:40:18.901 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:40:18.901 slat (usec): min=3, max=30629, avg=149.97, stdev=1260.57 00:40:18.901 clat (usec): min=1250, max=74573, avg=20611.85, stdev=10873.93 00:40:18.901 lat (usec): min=3557, max=74590, avg=20761.82, stdev=10981.56 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 5145], 5.00th=[ 8160], 10.00th=[ 9634], 20.00th=[12387], 00:40:18.901 | 30.00th=[14091], 40.00th=[14877], 50.00th=[16188], 60.00th=[21365], 00:40:18.901 | 70.00th=[24249], 80.00th=[26870], 90.00th=[38536], 95.00th=[43779], 00:40:18.901 | 99.00th=[51643], 99.50th=[51643], 99.90th=[54789], 99.95th=[65799], 00:40:18.901 | 99.99th=[74974] 00:40:18.901 bw ( KiB/s): min=12656, max=16016, per=22.00%, avg=14336.00, stdev=2375.88, samples=2 00:40:18.901 iops : min= 3164, max= 4004, avg=3584.00, stdev=593.97, samples=2 
00:40:18.901 lat (usec) : 750=0.01% 00:40:18.901 lat (msec) : 2=0.39%, 4=0.87%, 10=11.36%, 20=56.72%, 50=29.66% 00:40:18.901 lat (msec) : 100=0.98% 00:40:18.901 cpu : usr=2.10%, sys=5.19%, ctx=255, majf=0, minf=1 00:40:18.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:18.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:18.901 issued rwts: total=3325,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:18.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:18.901 job3: (groupid=0, jobs=1): err= 0: pid=447897: Tue Nov 19 16:46:08 2024 00:40:18.901 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:40:18.901 slat (usec): min=2, max=18545, avg=114.47, stdev=746.91 00:40:18.901 clat (usec): min=7205, max=37692, avg=14879.25, stdev=5128.07 00:40:18.901 lat (usec): min=7216, max=49119, avg=14993.73, stdev=5168.54 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 7308], 5.00th=[10028], 10.00th=[10814], 20.00th=[11863], 00:40:18.901 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[14615], 00:40:18.901 | 70.00th=[15401], 80.00th=[15926], 90.00th=[20317], 95.00th=[25035], 00:40:18.901 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:40:18.901 | 99.99th=[37487] 00:40:18.901 write: IOPS=4446, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1004msec); 0 zone resets 00:40:18.901 slat (usec): min=2, max=21421, avg=110.35, stdev=786.77 00:40:18.901 clat (usec): min=661, max=42627, avg=14879.88, stdev=6703.97 00:40:18.901 lat (usec): min=679, max=42642, avg=14990.23, stdev=6758.65 00:40:18.901 clat percentiles (usec): 00:40:18.901 | 1.00th=[ 6456], 5.00th=[ 7635], 10.00th=[ 9896], 20.00th=[11469], 00:40:18.901 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12780], 60.00th=[13698], 00:40:18.901 | 70.00th=[14746], 80.00th=[15926], 90.00th=[25035], 95.00th=[31327], 00:40:18.901 | 
99.00th=[39584], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:40:18.901 | 99.99th=[42730] 00:40:18.901 bw ( KiB/s): min=16128, max=18560, per=26.62%, avg=17344.00, stdev=1719.68, samples=2 00:40:18.901 iops : min= 4032, max= 4640, avg=4336.00, stdev=429.92, samples=2 00:40:18.901 lat (usec) : 750=0.02% 00:40:18.901 lat (msec) : 4=0.11%, 10=7.44%, 20=80.13%, 50=12.30% 00:40:18.901 cpu : usr=4.39%, sys=6.78%, ctx=372, majf=0, minf=2 00:40:18.901 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:40:18.901 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:18.901 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:18.901 issued rwts: total=4096,4464,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:18.901 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:18.901 00:40:18.901 Run status group 0 (all jobs): 00:40:18.901 READ: bw=58.5MiB/s (61.3MB/s), 11.9MiB/s-17.9MiB/s (12.5MB/s-18.8MB/s), io=59.0MiB (61.9MB), run=1003-1009msec 00:40:18.901 WRITE: bw=63.6MiB/s (66.7MB/s), 13.3MiB/s-19.2MiB/s (14.0MB/s-20.2MB/s), io=64.2MiB (67.3MB), run=1003-1009msec 00:40:18.901 00:40:18.901 Disk stats (read/write): 00:40:18.901 nvme0n1: ios=4133/4371, merge=0/0, ticks=19077/23520, in_queue=42597, util=97.70% 00:40:18.901 nvme0n2: ios=2581/2807, merge=0/0, ticks=28101/48649, in_queue=76750, util=96.85% 00:40:18.901 nvme0n3: ios=3118/3119, merge=0/0, ticks=40848/47279, in_queue=88127, util=98.23% 00:40:18.901 nvme0n4: ios=3492/3584, merge=0/0, ticks=31921/32800, in_queue=64721, util=90.66% 00:40:18.901 16:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:18.901 [global] 00:40:18.901 thread=1 00:40:18.901 invalidate=1 00:40:18.901 rw=randwrite 00:40:18.901 time_based=1 00:40:18.901 runtime=1 00:40:18.901 ioengine=libaio 00:40:18.901 
direct=1 00:40:18.901 bs=4096 00:40:18.901 iodepth=128 00:40:18.901 norandommap=0 00:40:18.901 numjobs=1 00:40:18.901 00:40:18.901 verify_dump=1 00:40:18.901 verify_backlog=512 00:40:18.901 verify_state_save=0 00:40:18.901 do_verify=1 00:40:18.901 verify=crc32c-intel 00:40:18.901 [job0] 00:40:18.901 filename=/dev/nvme0n1 00:40:18.901 [job1] 00:40:18.901 filename=/dev/nvme0n2 00:40:18.901 [job2] 00:40:18.901 filename=/dev/nvme0n3 00:40:18.901 [job3] 00:40:18.901 filename=/dev/nvme0n4 00:40:18.901 Could not set queue depth (nvme0n1) 00:40:18.901 Could not set queue depth (nvme0n2) 00:40:18.901 Could not set queue depth (nvme0n3) 00:40:18.901 Could not set queue depth (nvme0n4) 00:40:18.901 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:18.901 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:18.901 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:18.901 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:18.901 fio-3.35 00:40:18.901 Starting 4 threads 00:40:20.278 00:40:20.278 job0: (groupid=0, jobs=1): err= 0: pid=448115: Tue Nov 19 16:46:10 2024 00:40:20.278 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:40:20.278 slat (nsec): min=1979, max=46967k, avg=107929.54, stdev=936508.53 00:40:20.278 clat (usec): min=3437, max=61165, avg=14397.81, stdev=8277.21 00:40:20.278 lat (usec): min=3503, max=61170, avg=14505.74, stdev=8314.83 00:40:20.278 clat percentiles (usec): 00:40:20.278 | 1.00th=[ 5735], 5.00th=[ 7570], 10.00th=[ 9634], 20.00th=[10159], 00:40:20.278 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:40:20.278 | 70.00th=[15008], 80.00th=[17695], 90.00th=[19792], 95.00th=[23725], 00:40:20.278 | 99.00th=[56886], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 
00:40:20.278 | 99.99th=[61080] 00:40:20.278 write: IOPS=4720, BW=18.4MiB/s (19.3MB/s)(18.6MiB/1009msec); 0 zone resets 00:40:20.278 slat (usec): min=3, max=10977, avg=99.13, stdev=577.07 00:40:20.278 clat (usec): min=1247, max=36435, avg=12963.63, stdev=3837.83 00:40:20.278 lat (usec): min=1256, max=36440, avg=13062.76, stdev=3874.97 00:40:20.278 clat percentiles (usec): 00:40:20.278 | 1.00th=[ 3359], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10814], 00:40:20.278 | 30.00th=[11207], 40.00th=[11600], 50.00th=[12256], 60.00th=[12387], 00:40:20.278 | 70.00th=[13304], 80.00th=[15139], 90.00th=[17957], 95.00th=[19530], 00:40:20.278 | 99.00th=[27919], 99.50th=[30540], 99.90th=[31327], 99.95th=[31327], 00:40:20.278 | 99.99th=[36439] 00:40:20.278 bw ( KiB/s): min=16600, max=20480, per=26.78%, avg=18540.00, stdev=2743.57, samples=2 00:40:20.278 iops : min= 4150, max= 5120, avg=4635.00, stdev=685.89, samples=2 00:40:20.278 lat (msec) : 2=0.31%, 4=0.32%, 10=12.17%, 20=80.06%, 50=5.79% 00:40:20.278 lat (msec) : 100=1.36% 00:40:20.278 cpu : usr=3.67%, sys=5.36%, ctx=451, majf=0, minf=2 00:40:20.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:20.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.279 issued rwts: total=4608,4763,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.279 job1: (groupid=0, jobs=1): err= 0: pid=448116: Tue Nov 19 16:46:10 2024 00:40:20.279 read: IOPS=4790, BW=18.7MiB/s (19.6MB/s)(18.8MiB/1003msec) 00:40:20.279 slat (usec): min=2, max=14755, avg=102.43, stdev=621.50 00:40:20.279 clat (usec): min=717, max=38313, avg=12766.86, stdev=3937.75 00:40:20.279 lat (usec): min=2499, max=38326, avg=12869.29, stdev=3986.45 00:40:20.279 clat percentiles (usec): 00:40:20.279 | 1.00th=[ 5669], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10552], 
00:40:20.279 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11469], 60.00th=[11863], 00:40:20.279 | 70.00th=[12649], 80.00th=[13960], 90.00th=[19006], 95.00th=[21890], 00:40:20.279 | 99.00th=[26608], 99.50th=[29492], 99.90th=[30016], 99.95th=[34341], 00:40:20.279 | 99.99th=[38536] 00:40:20.279 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:40:20.279 slat (usec): min=3, max=9494, avg=90.58, stdev=449.02 00:40:20.279 clat (usec): min=4687, max=41505, avg=12712.69, stdev=4557.04 00:40:20.279 lat (usec): min=4701, max=41516, avg=12803.26, stdev=4578.59 00:40:20.279 clat percentiles (usec): 00:40:20.279 | 1.00th=[ 7504], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10683], 00:40:20.279 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11207], 60.00th=[11863], 00:40:20.279 | 70.00th=[12518], 80.00th=[13960], 90.00th=[17433], 95.00th=[18482], 00:40:20.279 | 99.00th=[37487], 99.50th=[38011], 99.90th=[41157], 99.95th=[41681], 00:40:20.279 | 99.99th=[41681] 00:40:20.279 bw ( KiB/s): min=20480, max=20480, per=29.58%, avg=20480.00, stdev= 0.00, samples=2 00:40:20.279 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:40:20.279 lat (usec) : 750=0.01% 00:40:20.279 lat (msec) : 4=0.28%, 10=9.75%, 20=84.11%, 50=5.84% 00:40:20.279 cpu : usr=5.19%, sys=8.78%, ctx=543, majf=0, minf=1 00:40:20.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:20.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.279 issued rwts: total=4805,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.279 job2: (groupid=0, jobs=1): err= 0: pid=448117: Tue Nov 19 16:46:10 2024 00:40:20.279 read: IOPS=3959, BW=15.5MiB/s (16.2MB/s)(15.6MiB/1010msec) 00:40:20.279 slat (usec): min=2, max=14511, avg=117.71, stdev=772.26 00:40:20.279 clat (usec): min=1632, 
max=35388, avg=15072.17, stdev=4695.87 00:40:20.279 lat (usec): min=4367, max=35401, avg=15189.88, stdev=4734.24 00:40:20.279 clat percentiles (usec): 00:40:20.279 | 1.00th=[ 7046], 5.00th=[ 9372], 10.00th=[10683], 20.00th=[12125], 00:40:20.279 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13698], 60.00th=[14353], 00:40:20.279 | 70.00th=[16450], 80.00th=[18220], 90.00th=[22152], 95.00th=[23987], 00:40:20.279 | 99.00th=[29492], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:40:20.279 | 99.99th=[35390] 00:40:20.279 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:40:20.279 slat (usec): min=3, max=19559, avg=120.97, stdev=825.49 00:40:20.279 clat (usec): min=373, max=79762, avg=16321.15, stdev=10858.60 00:40:20.279 lat (usec): min=622, max=80515, avg=16442.12, stdev=10933.58 00:40:20.279 clat percentiles (usec): 00:40:20.279 | 1.00th=[ 4686], 5.00th=[ 8586], 10.00th=[10814], 20.00th=[11863], 00:40:20.279 | 30.00th=[12256], 40.00th=[12518], 50.00th=[13042], 60.00th=[13435], 00:40:20.279 | 70.00th=[15008], 80.00th=[18482], 90.00th=[25035], 95.00th=[33817], 00:40:20.279 | 99.00th=[73925], 99.50th=[79168], 99.90th=[80217], 99.95th=[80217], 00:40:20.279 | 99.99th=[80217] 00:40:20.279 bw ( KiB/s): min=15344, max=17424, per=23.66%, avg=16384.00, stdev=1470.78, samples=2 00:40:20.279 iops : min= 3836, max= 4356, avg=4096.00, stdev=367.70, samples=2 00:40:20.279 lat (usec) : 500=0.01%, 750=0.05%, 1000=0.06% 00:40:20.279 lat (msec) : 2=0.27%, 4=0.07%, 10=6.82%, 20=75.76%, 50=15.38% 00:40:20.279 lat (msec) : 100=1.57% 00:40:20.279 cpu : usr=3.47%, sys=7.83%, ctx=312, majf=0, minf=1 00:40:20.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:20.279 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.279 issued rwts: total=3999,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.279 latency : target=0, 
window=0, percentile=100.00%, depth=128 00:40:20.279 job3: (groupid=0, jobs=1): err= 0: pid=448118: Tue Nov 19 16:46:10 2024 00:40:20.279 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:40:20.279 slat (usec): min=2, max=13406, avg=134.17, stdev=868.85 00:40:20.279 clat (usec): min=4176, max=40707, avg=17484.95, stdev=7302.32 00:40:20.279 lat (usec): min=4180, max=40725, avg=17619.12, stdev=7347.02 00:40:20.279 clat percentiles (usec): 00:40:20.279 | 1.00th=[ 7308], 5.00th=[10421], 10.00th=[11731], 20.00th=[12649], 00:40:20.279 | 30.00th=[13173], 40.00th=[14222], 50.00th=[14877], 60.00th=[16057], 00:40:20.279 | 70.00th=[17695], 80.00th=[21890], 90.00th=[28705], 95.00th=[35390], 00:40:20.279 | 99.00th=[40109], 99.50th=[40633], 99.90th=[40633], 99.95th=[40633], 00:40:20.279 | 99.99th=[40633] 00:40:20.279 write: IOPS=3499, BW=13.7MiB/s (14.3MB/s)(13.7MiB/1001msec); 0 zone resets 00:40:20.279 slat (usec): min=3, max=23933, avg=155.36, stdev=1003.53 00:40:20.279 clat (usec): min=461, max=117219, avg=20702.29, stdev=17501.99 00:40:20.279 lat (msec): min=2, max=117, avg=20.86, stdev=17.60 00:40:20.279 clat percentiles (msec): 00:40:20.279 | 1.00th=[ 4], 5.00th=[ 7], 10.00th=[ 11], 20.00th=[ 12], 00:40:20.279 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 16], 60.00th=[ 20], 00:40:20.279 | 70.00th=[ 23], 80.00th=[ 26], 90.00th=[ 30], 95.00th=[ 47], 00:40:20.279 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 117], 99.95th=[ 117], 00:40:20.279 | 99.99th=[ 117] 00:40:20.279 bw ( KiB/s): min=12800, max=12800, per=18.49%, avg=12800.00, stdev= 0.00, samples=1 00:40:20.279 iops : min= 3200, max= 3200, avg=3200.00, stdev= 0.00, samples=1 00:40:20.279 lat (usec) : 500=0.02% 00:40:20.279 lat (msec) : 4=1.11%, 10=6.36%, 20=62.48%, 50=27.62%, 100=1.46% 00:40:20.279 lat (msec) : 250=0.96% 00:40:20.279 cpu : usr=4.10%, sys=5.70%, ctx=298, majf=0, minf=1 00:40:20.279 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:40:20.279 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:20.279 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:20.279 issued rwts: total=3072,3503,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:20.279 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:20.279 00:40:20.279 Run status group 0 (all jobs): 00:40:20.279 READ: bw=63.8MiB/s (66.8MB/s), 12.0MiB/s-18.7MiB/s (12.6MB/s-19.6MB/s), io=64.4MiB (67.5MB), run=1001-1010msec 00:40:20.279 WRITE: bw=67.6MiB/s (70.9MB/s), 13.7MiB/s-19.9MiB/s (14.3MB/s-20.9MB/s), io=68.3MiB (71.6MB), run=1001-1010msec 00:40:20.279 00:40:20.279 Disk stats (read/write): 00:40:20.279 nvme0n1: ios=4146/4135, merge=0/0, ticks=20296/22667, in_queue=42963, util=86.87% 00:40:20.279 nvme0n2: ios=4088/4096, merge=0/0, ticks=18072/17489, in_queue=35561, util=90.46% 00:40:20.279 nvme0n3: ios=3186/3584, merge=0/0, ticks=25385/42308, in_queue=67693, util=99.69% 00:40:20.279 nvme0n4: ios=2761/3072, merge=0/0, ticks=26496/43848, in_queue=70344, util=95.69% 00:40:20.279 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:20.279 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=448245 00:40:20.279 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:20.279 16:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:20.279 [global] 00:40:20.279 thread=1 00:40:20.279 invalidate=1 00:40:20.279 rw=read 00:40:20.279 time_based=1 00:40:20.279 runtime=10 00:40:20.279 ioengine=libaio 00:40:20.279 direct=1 00:40:20.279 bs=4096 00:40:20.279 iodepth=1 00:40:20.279 norandommap=1 00:40:20.279 numjobs=1 00:40:20.279 00:40:20.279 [job0] 00:40:20.279 filename=/dev/nvme0n1 00:40:20.279 [job1] 00:40:20.279 filename=/dev/nvme0n2 00:40:20.279 [job2] 
00:40:20.279 filename=/dev/nvme0n3 00:40:20.279 [job3] 00:40:20.279 filename=/dev/nvme0n4 00:40:20.279 Could not set queue depth (nvme0n1) 00:40:20.279 Could not set queue depth (nvme0n2) 00:40:20.279 Could not set queue depth (nvme0n3) 00:40:20.279 Could not set queue depth (nvme0n4) 00:40:20.538 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:20.538 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:20.538 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:20.538 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:20.538 fio-3.35 00:40:20.538 Starting 4 threads 00:40:23.824 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:23.824 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:23.824 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=290816, buflen=4096 00:40:23.824 fio: pid=448348, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:23.824 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:23.824 16:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:23.824 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=339968, buflen=4096 00:40:23.824 fio: pid=448347, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:24.083 
fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=58994688, buflen=4096 00:40:24.083 fio: pid=448342, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:24.083 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:24.083 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:24.342 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:24.342 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:24.342 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=4284416, buflen=4096 00:40:24.342 fio: pid=448346, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:24.342 00:40:24.342 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=448342: Tue Nov 19 16:46:14 2024 00:40:24.342 read: IOPS=4160, BW=16.3MiB/s (17.0MB/s)(56.3MiB/3462msec) 00:40:24.342 slat (usec): min=4, max=28279, avg=11.74, stdev=271.03 00:40:24.342 clat (usec): min=190, max=42237, avg=225.29, stdev=352.43 00:40:24.342 lat (usec): min=198, max=42242, avg=237.03, stdev=445.33 00:40:24.342 clat percentiles (usec): 00:40:24.342 | 1.00th=[ 200], 5.00th=[ 202], 10.00th=[ 204], 20.00th=[ 206], 00:40:24.342 | 30.00th=[ 210], 40.00th=[ 212], 50.00th=[ 215], 60.00th=[ 219], 00:40:24.342 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 273], 00:40:24.342 | 99.00th=[ 379], 99.50th=[ 416], 99.90th=[ 510], 99.95th=[ 635], 00:40:24.342 | 99.99th=[ 3392] 00:40:24.342 
bw ( KiB/s): min=15864, max=18136, per=100.00%, avg=16756.00, stdev=942.72, samples=6 00:40:24.342 iops : min= 3966, max= 4534, avg=4189.00, stdev=235.68, samples=6 00:40:24.342 lat (usec) : 250=91.88%, 500=7.99%, 750=0.10%, 1000=0.01% 00:40:24.342 lat (msec) : 4=0.01%, 50=0.01% 00:40:24.342 cpu : usr=1.10%, sys=3.99%, ctx=14409, majf=0, minf=2 00:40:24.342 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:24.342 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.342 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.342 issued rwts: total=14404,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.342 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:24.342 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=448346: Tue Nov 19 16:46:14 2024 00:40:24.342 read: IOPS=277, BW=1110KiB/s (1136kB/s)(4184KiB/3771msec) 00:40:24.342 slat (usec): min=4, max=9913, avg=35.27, stdev=471.65 00:40:24.342 clat (usec): min=195, max=42394, avg=3544.44, stdev=11191.37 00:40:24.342 lat (usec): min=203, max=52054, avg=3579.68, stdev=11230.28 00:40:24.342 clat percentiles (usec): 00:40:24.342 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 233], 00:40:24.342 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 260], 00:40:24.342 | 70.00th=[ 269], 80.00th=[ 297], 90.00th=[ 412], 95.00th=[41681], 00:40:24.342 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:24.342 | 99.99th=[42206] 00:40:24.342 bw ( KiB/s): min= 264, max= 3833, per=6.53%, avg=1080.14, stdev=1365.32, samples=7 00:40:24.342 iops : min= 66, max= 958, avg=270.00, stdev=341.24, samples=7 00:40:24.342 lat (usec) : 250=47.76%, 500=44.13%, 750=0.10% 00:40:24.342 lat (msec) : 50=7.93% 00:40:24.343 cpu : usr=0.05%, sys=0.50%, ctx=1052, majf=0, minf=1 00:40:24.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 
00:40:24.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 issued rwts: total=1047,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:24.343 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=448347: Tue Nov 19 16:46:14 2024 00:40:24.343 read: IOPS=26, BW=104KiB/s (106kB/s)(332KiB/3204msec) 00:40:24.343 slat (usec): min=8, max=6896, avg=99.58, stdev=750.51 00:40:24.343 clat (usec): min=303, max=41338, avg=38217.90, stdev=10032.48 00:40:24.343 lat (usec): min=322, max=47983, avg=38318.47, stdev=10085.05 00:40:24.343 clat percentiles (usec): 00:40:24.343 | 1.00th=[ 306], 5.00th=[ 766], 10.00th=[40633], 20.00th=[41157], 00:40:24.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:24.343 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:24.343 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:24.343 | 99.99th=[41157] 00:40:24.343 bw ( KiB/s): min= 96, max= 128, per=0.63%, avg=104.00, stdev=12.39, samples=6 00:40:24.343 iops : min= 24, max= 32, avg=26.00, stdev= 3.10, samples=6 00:40:24.343 lat (usec) : 500=2.38%, 750=2.38%, 1000=1.19% 00:40:24.343 lat (msec) : 20=1.19%, 50=91.67% 00:40:24.343 cpu : usr=0.09%, sys=0.00%, ctx=86, majf=0, minf=2 00:40:24.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:24.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 issued rwts: total=84,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:24.343 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=448348: Tue Nov 19 16:46:14 2024 00:40:24.343 read: IOPS=24, BW=97.9KiB/s (100kB/s)(284KiB/2900msec) 00:40:24.343 slat (nsec): min=9865, max=38514, avg=18296.81, stdev=6941.94 00:40:24.343 clat (usec): min=489, max=41985, avg=40494.82, stdev=4823.68 00:40:24.343 lat (usec): min=512, max=42001, avg=40513.17, stdev=4823.15 00:40:24.343 clat percentiles (usec): 00:40:24.343 | 1.00th=[ 490], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:24.343 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:24.343 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:40:24.343 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:24.343 | 99.99th=[42206] 00:40:24.343 bw ( KiB/s): min= 96, max= 104, per=0.59%, avg=97.60, stdev= 3.58, samples=5 00:40:24.343 iops : min= 24, max= 26, avg=24.40, stdev= 0.89, samples=5 00:40:24.343 lat (usec) : 500=1.39% 00:40:24.343 lat (msec) : 50=97.22% 00:40:24.343 cpu : usr=0.10%, sys=0.00%, ctx=73, majf=0, minf=1 00:40:24.343 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:24.343 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:24.343 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:24.343 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:24.343 00:40:24.343 Run status group 0 (all jobs): 00:40:24.343 READ: bw=16.2MiB/s (16.9MB/s), 97.9KiB/s-16.3MiB/s (100kB/s-17.0MB/s), io=60.9MiB (63.9MB), run=2900-3771msec 00:40:24.343 00:40:24.343 Disk stats (read/write): 00:40:24.343 nvme0n1: ios=14208/0, merge=0/0, ticks=4134/0, in_queue=4134, util=98.46% 00:40:24.343 nvme0n2: ios=1085/0, merge=0/0, ticks=4655/0, in_queue=4655, util=98.82% 00:40:24.343 nvme0n3: ios=133/0, merge=0/0, ticks=4224/0, in_queue=4224, util=99.16% 00:40:24.343 nvme0n4: ios=115/0, merge=0/0, ticks=3798/0, in_queue=3798, util=99.25% 
00:40:24.601 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:24.601 16:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:24.858 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:24.858 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:25.116 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.116 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 448245 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:25.683 16:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 
00:40:25.941 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:25.941 nvmf hotplug test: fio failed as expected 00:40:25.941 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:26.202 rmmod nvme_tcp 00:40:26.202 rmmod nvme_fabrics 00:40:26.202 rmmod nvme_keyring 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 446295 ']' 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 446295 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 446295 ']' 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 446295 00:40:26.202 16:46:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 446295 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 446295' 00:40:26.202 killing process with pid 446295 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 446295 00:40:26.202 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 446295 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:26.463 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:26.464 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:26.464 16:46:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:26.464 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:26.464 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:26.464 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:26.464 16:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:29.005 00:40:29.005 real 0m24.155s 00:40:29.005 user 1m8.302s 00:40:29.005 sys 0m9.876s 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:29.005 ************************************ 00:40:29.005 END TEST nvmf_fio_target 00:40:29.005 ************************************ 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:29.005 ************************************ 00:40:29.005 START TEST nvmf_bdevio 00:40:29.005 
************************************ 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:40:29.005 * Looking for test storage... 00:40:29.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:29.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.005 --rc genhtml_branch_coverage=1 00:40:29.005 --rc genhtml_function_coverage=1 00:40:29.005 --rc genhtml_legend=1 00:40:29.005 --rc geninfo_all_blocks=1 00:40:29.005 --rc geninfo_unexecuted_blocks=1 00:40:29.005 00:40:29.005 ' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:29.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.005 --rc genhtml_branch_coverage=1 00:40:29.005 --rc genhtml_function_coverage=1 00:40:29.005 --rc genhtml_legend=1 00:40:29.005 --rc geninfo_all_blocks=1 00:40:29.005 --rc geninfo_unexecuted_blocks=1 00:40:29.005 00:40:29.005 ' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:29.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:29.005 --rc genhtml_branch_coverage=1 00:40:29.005 --rc genhtml_function_coverage=1 00:40:29.005 --rc genhtml_legend=1 00:40:29.005 --rc geninfo_all_blocks=1 00:40:29.005 --rc geninfo_unexecuted_blocks=1 00:40:29.005 00:40:29.005 ' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:29.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:40:29.005 --rc genhtml_branch_coverage=1 00:40:29.005 --rc genhtml_function_coverage=1 00:40:29.005 --rc genhtml_legend=1 00:40:29.005 --rc geninfo_all_blocks=1 00:40:29.005 --rc geninfo_unexecuted_blocks=1 00:40:29.005 00:40:29.005 ' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.005 16:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
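The `nvme gen-hostnqn` call above yields a UUID-based NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`. A hedged sketch of that format (the UUID here is random, unlike the host's stable ID in the log):

```python
# Assumed reconstruction of the gen-hostnqn output format, not nvme-cli itself.
import uuid

def gen_hostnqn() -> str:
    return f"nqn.2014-08.org.nvmexpress:uuid:{uuid.uuid4()}"

print(gen_hostnqn())
```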
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.005 16:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
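Notably, the PATH exported by `paths/export.sh` above contains the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` prefixes many times over, because each sourcing prepends them again. A small first-seen-wins de-duplication sketch (not part of the script, just an illustration of the fix):

```python
import os

def dedup_path(path: str) -> str:
    """Drop duplicate PATH entries while keeping first-seen order."""
    seen, out = set(), []
    for entry in path.split(os.pathsep):
        if entry and entry not in seen:
            seen.add(entry)
            out.append(entry)
    return os.pathsep.join(out)
```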
-- # '[' 0 -eq 1 ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:29.005 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:40:29.006 16:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:30.910 16:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:30.910 16:46:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:30.910 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:30.910 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:30.910 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:30.910 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:30.910 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:30.910 
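The device scan above buckets PCI vendor:device IDs into the `e810`, `x722`, and `mlx` arrays (both `0x8086:0x159b` ports land in `e810`, driver `ice`), then looks under `/sys/bus/pci/devices/<bdf>/net/` for the bound netdevs. A rough Python rendering of the classification step, using only the IDs visible in this trace:

```python
# Sketch of nvmf/common.sh's PCI ID bucketing; ID sets trimmed to those in the log.
E810_IDS = {("0x8086", "0x1592"), ("0x8086", "0x159b")}  # intel=0x8086
X722_IDS = {("0x8086", "0x37d2")}

def classify(vendor: str, device: str) -> str:
    if (vendor, device) in E810_IDS:
        return "e810"
    if (vendor, device) in X722_IDS:
        return "x722"
    if vendor == "0x15b3":  # mellanox: matched per-device in the real script
        return "mlx"
    return "unknown"

print(classify("0x8086", "0x159b"))  # the two ports found in the log -> e810
```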
16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:30.911 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:31.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:31.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:40:31.170 00:40:31.170 --- 10.0.0.2 ping statistics --- 00:40:31.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.170 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:31.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:31.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.057 ms 00:40:31.170 00:40:31.170 --- 10.0.0.1 ping statistics --- 00:40:31.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:31.170 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
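The `nvmf_tcp_init` sequence above moves the target port into a fresh network namespace, assigns 10.0.0.1/10.0.0.2, brings the links up, and verifies both directions with ping. A dry-run reconstruction of that command sequence (strings only, nothing executed; ordering follows the trace):

```python
def netns_setup_commands(target_if: str, initiator_if: str,
                         ns: str = "cvl_0_0_ns_spdk") -> list[str]:
    """Reconstruct, as plain strings, the ip(8) calls the log shows."""
    return [
        f"ip -4 addr flush {target_if}",
        f"ip -4 addr flush {initiator_if}",
        f"ip netns add {ns}",
        f"ip link set {target_if} netns {ns}",
        f"ip addr add 10.0.0.1/24 dev {initiator_if}",
        f"ip netns exec {ns} ip addr add 10.0.0.2/24 dev {target_if}",
        f"ip link set {initiator_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
    ]

for cmd in netns_setup_commands("cvl_0_0", "cvl_0_1"):
    print(cmd)
```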
nvmf/common.sh@509 -- # nvmfpid=450991 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 450991 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 450991 ']' 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:31.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:31.170 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.170 [2024-11-19 16:46:21.356028] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:31.170 [2024-11-19 16:46:21.357141] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:40:31.170 [2024-11-19 16:46:21.357200] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:31.170 [2024-11-19 16:46:21.428745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:31.170 [2024-11-19 16:46:21.475387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:31.170 [2024-11-19 16:46:21.475440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:31.170 [2024-11-19 16:46:21.475462] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:31.170 [2024-11-19 16:46:21.475473] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:31.170 [2024-11-19 16:46:21.475483] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:31.170 [2024-11-19 16:46:21.476950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:40:31.170 [2024-11-19 16:46:21.477056] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:40:31.170 [2024-11-19 16:46:21.477146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:40:31.170 [2024-11-19 16:46:21.477150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:40:31.429 [2024-11-19 16:46:21.560147] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:31.429 [2024-11-19 16:46:21.560395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:31.429 [2024-11-19 16:46:21.560653] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:40:31.429 [2024-11-19 16:46:21.561328] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:31.429 [2024-11-19 16:46:21.561588] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 [2024-11-19 16:46:21.617862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 Malloc0 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:31.429 [2024-11-19 16:46:21.690075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
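The `rpc_cmd` calls above (driven by `target/bdevio.sh`) configure the target over `/var/tmp/spdk.sock`. A hypothetical equivalent using SPDK's `rpc.py` directly, against a running target (a fragment for orientation, not runnable standalone; arguments follow the log):

```shell
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```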
00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:31.429 { 00:40:31.429 "params": { 00:40:31.429 "name": "Nvme$subsystem", 00:40:31.429 "trtype": "$TEST_TRANSPORT", 00:40:31.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:31.429 "adrfam": "ipv4", 00:40:31.429 "trsvcid": "$NVMF_PORT", 00:40:31.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:31.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:31.429 "hdgst": ${hdgst:-false}, 00:40:31.429 "ddgst": ${ddgst:-false} 00:40:31.429 }, 00:40:31.429 "method": "bdev_nvme_attach_controller" 00:40:31.429 } 00:40:31.429 EOF 00:40:31.429 )") 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:40:31.429 16:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:31.429 "params": { 00:40:31.429 "name": "Nvme1", 00:40:31.429 "trtype": "tcp", 00:40:31.429 "traddr": "10.0.0.2", 00:40:31.429 "adrfam": "ipv4", 00:40:31.429 "trsvcid": "4420", 00:40:31.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:31.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:31.429 "hdgst": false, 00:40:31.429 "ddgst": false 00:40:31.429 }, 00:40:31.429 "method": "bdev_nvme_attach_controller" 00:40:31.429 }' 00:40:31.429 [2024-11-19 16:46:21.740298] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:40:31.429 [2024-11-19 16:46:21.740380] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid451053 ] 00:40:31.700 [2024-11-19 16:46:21.809725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:40:31.700 [2024-11-19 16:46:21.861025] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:31.700 [2024-11-19 16:46:21.861044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:40:31.700 [2024-11-19 16:46:21.861047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:31.959 I/O targets: 00:40:31.959 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:40:31.959 00:40:31.959 00:40:31.959 CUnit - A unit testing framework for C - Version 2.1-3 00:40:31.959 http://cunit.sourceforge.net/ 00:40:31.959 00:40:31.959 00:40:31.959 Suite: bdevio tests on: Nvme1n1 00:40:31.959 Test: blockdev write read block ...passed 00:40:31.959 Test: blockdev write zeroes read block ...passed 00:40:31.959 Test: blockdev write zeroes read no split ...passed 00:40:31.959 Test: blockdev 
write zeroes read split ...passed 00:40:31.959 Test: blockdev write zeroes read split partial ...passed 00:40:31.959 Test: blockdev reset ...[2024-11-19 16:46:22.193968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:40:31.959 [2024-11-19 16:46:22.194085] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x163cb70 (9): Bad file descriptor 00:40:31.959 [2024-11-19 16:46:22.198180] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:40:31.959 passed 00:40:31.959 Test: blockdev write read 8 blocks ...passed 00:40:31.959 Test: blockdev write read size > 128k ...passed 00:40:31.959 Test: blockdev write read invalid size ...passed 00:40:31.959 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:40:31.959 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:40:31.959 Test: blockdev write read max offset ...passed 00:40:32.217 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:40:32.217 Test: blockdev writev readv 8 blocks ...passed 00:40:32.217 Test: blockdev writev readv 30 x 1block ...passed 00:40:32.217 Test: blockdev writev readv block ...passed 00:40:32.217 Test: blockdev writev readv size > 128k ...passed 00:40:32.217 Test: blockdev writev readv size > 128k in two iovs ...passed 00:40:32.217 Test: blockdev comparev and writev ...[2024-11-19 16:46:22.370834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.370872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.370896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 
[2024-11-19 16:46:22.370923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.371302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.371328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.371350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.371366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.371746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.371770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.371792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.371810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.372185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.372210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.372232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:40:32.217 [2024-11-19 16:46:22.372248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:40:32.217 passed 00:40:32.217 Test: blockdev nvme passthru rw ...passed 00:40:32.217 Test: blockdev nvme passthru vendor specific ...[2024-11-19 16:46:22.454308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:32.217 [2024-11-19 16:46:22.454338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:40:32.217 [2024-11-19 16:46:22.454484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:32.217 [2024-11-19 16:46:22.454506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:40:32.218 [2024-11-19 16:46:22.454649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:32.218 [2024-11-19 16:46:22.454673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:40:32.218 [2024-11-19 16:46:22.454817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:40:32.218 [2024-11-19 16:46:22.454840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:40:32.218 passed 00:40:32.218 Test: blockdev nvme admin passthru ...passed 00:40:32.218 Test: blockdev copy ...passed 00:40:32.218 00:40:32.218 Run Summary: Type Total Ran Passed Failed Inactive 00:40:32.218 suites 1 1 n/a 0 0 00:40:32.218 tests 23 23 23 0 0 00:40:32.218 asserts 152 152 152 0 n/a 00:40:32.218 00:40:32.218 Elapsed time = 0.941 
seconds 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:32.475 rmmod nvme_tcp 00:40:32.475 rmmod nvme_fabrics 00:40:32.475 rmmod nvme_keyring 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 450991 ']' 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 450991 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 450991 ']' 00:40:32.475 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 450991 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 450991 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 450991' 00:40:32.476 killing process with pid 450991 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 450991 00:40:32.476 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 450991 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.734 16:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.268 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:35.268 00:40:35.268 real 0m6.234s 00:40:35.268 user 0m7.247s 00:40:35.268 sys 0m2.496s 00:40:35.268 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:35.268 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:40:35.269 ************************************ 00:40:35.269 END TEST nvmf_bdevio 00:40:35.269 ************************************ 00:40:35.269 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:40:35.269 00:40:35.269 real 3m54.562s 00:40:35.269 user 8m52.754s 00:40:35.269 sys 1m23.624s 00:40:35.269 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:40:35.269 16:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:35.269 ************************************ 00:40:35.269 END TEST nvmf_target_core_interrupt_mode 00:40:35.269 ************************************ 00:40:35.269 16:46:25 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:35.269 16:46:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:35.269 16:46:25 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:35.269 16:46:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:35.269 ************************************ 00:40:35.269 START TEST nvmf_interrupt 00:40:35.269 ************************************ 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:40:35.269 * Looking for test storage... 
00:40:35.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:35.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.269 --rc genhtml_branch_coverage=1 00:40:35.269 --rc genhtml_function_coverage=1 00:40:35.269 --rc genhtml_legend=1 00:40:35.269 --rc geninfo_all_blocks=1 00:40:35.269 --rc geninfo_unexecuted_blocks=1 00:40:35.269 00:40:35.269 ' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:35.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.269 --rc genhtml_branch_coverage=1 00:40:35.269 --rc 
genhtml_function_coverage=1 00:40:35.269 --rc genhtml_legend=1 00:40:35.269 --rc geninfo_all_blocks=1 00:40:35.269 --rc geninfo_unexecuted_blocks=1 00:40:35.269 00:40:35.269 ' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:35.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.269 --rc genhtml_branch_coverage=1 00:40:35.269 --rc genhtml_function_coverage=1 00:40:35.269 --rc genhtml_legend=1 00:40:35.269 --rc geninfo_all_blocks=1 00:40:35.269 --rc geninfo_unexecuted_blocks=1 00:40:35.269 00:40:35.269 ' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:35.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:35.269 --rc genhtml_branch_coverage=1 00:40:35.269 --rc genhtml_function_coverage=1 00:40:35.269 --rc genhtml_legend=1 00:40:35.269 --rc geninfo_all_blocks=1 00:40:35.269 --rc geninfo_unexecuted_blocks=1 00:40:35.269 00:40:35.269 ' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:35.269 
16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.269 
16:46:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:35.269 16:46:25 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:35.269 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:35.270 
16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:40:35.270 16:46:25 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:37.177 16:46:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:40:37.177 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:40:37.177 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.177 16:46:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:40:37.177 Found net devices under 0000:0a:00.0: cvl_0_0 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:40:37.177 Found net devices under 0000:0a:00.1: cvl_0_1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:37.177 16:46:27 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:37.177 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:37.177 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.387 ms 00:40:37.177 00:40:37.177 --- 10.0.0.2 ping statistics --- 00:40:37.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.177 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:40:37.177 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:37.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:40:37.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:40:37.177 00:40:37.177 --- 10.0.0.1 ping statistics --- 00:40:37.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:37.178 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:40:37.178 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:37.178 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:40:37.178 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:37.178 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:37.436 16:46:27 
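The nvmf_tcp_init steps traced above (address flush, namespace creation, address assignment, iptables rule, ping verification) condense to a short sequence. Interface names and IPs are taken from the log; this is a sketch, not the script itself, and since the real commands need root, `RUN=echo` keeps it a dry run that only prints what would be executed.

```shell
# Dry-run sketch of the topology nvmf_tcp_init builds above: the target
# port (cvl_0_0) is moved into its own network namespace so NVMe/TCP
# traffic between the two ports of the same NIC actually crosses the wire.
# Set RUN= (empty) to execute for real; the default RUN=echo only prints.
RUN=${RUN:-echo}
NS=cvl_0_0_ns_spdk TGT=cvl_0_0 INI=cvl_0_1
$RUN ip -4 addr flush "$TGT"
$RUN ip -4 addr flush "$INI"
$RUN ip netns add "$NS"
$RUN ip link set "$TGT" netns "$NS"                           # target port into the namespace
$RUN ip addr add 10.0.0.1/24 dev "$INI"                       # initiator IP, default namespace
$RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT"   # target IP, inside the namespace
$RUN ip link set "$INI" up
$RUN ip netns exec "$NS" ip link set "$TGT" up
$RUN ip netns exec "$NS" ip link set lo up
$RUN iptables -I INPUT 1 -i "$INI" -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
$RUN ping -c 1 10.0.0.2                                       # initiator -> target
$RUN ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator
```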
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=453133 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 453133 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 453133 ']' 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:37.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:37.436 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.436 [2024-11-19 16:46:27.581814] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:37.436 [2024-11-19 16:46:27.582940] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:40:37.436 [2024-11-19 16:46:27.583016] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:37.436 [2024-11-19 16:46:27.655673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:37.436 [2024-11-19 16:46:27.699589] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:37.436 [2024-11-19 16:46:27.699652] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:37.436 [2024-11-19 16:46:27.699675] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:37.436 [2024-11-19 16:46:27.699687] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:37.436 [2024-11-19 16:46:27.699698] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:37.436 [2024-11-19 16:46:27.701191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:37.436 [2024-11-19 16:46:27.701198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:37.695 [2024-11-19 16:46:27.783046] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:37.695 [2024-11-19 16:46:27.783125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:37.695 [2024-11-19 16:46:27.783338] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:40:37.695 5000+0 records in 00:40:37.695 5000+0 records out 00:40:37.695 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0135905 s, 753 MB/s 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 AIO0 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.695 16:46:27 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 [2024-11-19 16:46:27.885860] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:37.695 [2024-11-19 16:46:27.914225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 453133 0 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
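The target bring-up scattered through the `rpc_cmd` calls above reduces to a handful of JSON-RPC invocations. A hedged condensation, assuming SPDK's standard `rpc.py` client and an already-running `nvmf_tgt`; the NQN, serial, transport options, and AIO sizes are taken from the log, while `/tmp/aiofile` is a stand-in for the workspace path, and `RUN=echo` again keeps this a dry run.

```shell
# Dry-run condensation of the bring-up above: a 10 MB file-backed AIO bdev,
# a TCP transport, a subsystem with that bdev as namespace 1, and a
# listener on 10.0.0.2:4420. RUN=echo prints instead of executing.
RUN=${RUN:-echo}
$RUN dd if=/dev/zero of=/tmp/aiofile bs=2048 count=5000
$RUN rpc.py bdev_aio_create /tmp/aiofile AIO0 2048
$RUN rpc.py nvmf_create_transport -t tcp -o -u 8192 -q 256
$RUN rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$RUN rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
$RUN rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

Note that in the trace the target runs inside the `cvl_0_0_ns_spdk` namespace, so the real invocations are wrapped in `ip netns exec`.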
-- # reactor_is_busy_or_idle 453133 0 idle 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:37.695 16:46:27 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453133 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453133 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.25 reactor_0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 453133 1 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453133 1 idle 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453137 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 reactor_1' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453137 root 20 0 128.2g 46848 34176 S 0.0 0.1 0:00.00 
reactor_1 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=453294 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 453133 0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 453133 0 busy 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:37.954 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:37.955 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:37.955 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:37.955 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453133 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0' 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453133 root 20 0 128.2g 47232 34176 S 0.0 0.1 0:00.25 reactor_0 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:38.226 16:46:28 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:40:39.259 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:40:39.259 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:39.259 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
453133 -w 256 00:40:39.259 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453133 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:02.44 reactor_0' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453133 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:02.44 reactor_0 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 453133 1 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 453133 1 busy 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:40:39.519 16:46:29 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453137 root 20 0 128.2g 47616 34176 R 93.3 0.1 0:01.24 reactor_1' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453137 root 20 0 128.2g 47616 34176 R 93.3 0.1 0:01.24 reactor_1 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=93.3 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=93 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:39.519 16:46:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 453294 00:40:49.495 Initializing NVMe Controllers 00:40:49.495 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:49.495 Controller IO queue size 256, less than 
required. 00:40:49.495 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:40:49.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:40:49.495 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:40:49.495 Initialization complete. Launching workers. 00:40:49.495 ======================================================== 00:40:49.495 Latency(us) 00:40:49.495 Device Information : IOPS MiB/s Average min max 00:40:49.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13315.74 52.01 19241.40 4496.99 59260.36 00:40:49.495 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13637.94 53.27 18785.35 4423.68 58691.95 00:40:49.495 ======================================================== 00:40:49.495 Total : 26953.68 105.29 19010.65 4423.68 59260.36 00:40:49.495 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 453133 0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453133 0 idle 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453133 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.07 reactor_0' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453133 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:20.07 reactor_0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 453133 1 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453133 1 idle 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:49.495 16:46:38 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453137 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.84 reactor_1' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453137 root 20 0 128.2g 47616 34176 S 0.0 0.1 0:09.84 reactor_1 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
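The `reactor_is_busy_or_idle` checks repeated throughout the trace boil down to: grab the reactor thread's line from `top -bHn 1`, read the %CPU column (field 9), drop the fractional part, and compare against a threshold. Below is a standalone re-implementation of that parsing; the function name and the busy/idle split are this sketch's, not the exact shape of `interrupt/common.sh`, and the sample line is lifted from the log above.

```shell
# Classify one reactor thread line from `top -bHn 1 -p <pid> -w 256` output.
# As in the trace, %CPU is field 9 and is truncated to an integer before
# comparison (the idle checks above use idle_threshold=30, busy_threshold=65).
reactor_state() {
  local line=$1 idle_threshold=$2 busy_threshold=$3 cpu_rate
  cpu_rate=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')
  cpu_rate=${cpu_rate%%.*}                 # 99.9 -> 99, 0.0 -> 0
  if [ "$cpu_rate" -ge "$busy_threshold" ]; then
    echo busy
  elif [ "$cpu_rate" -le "$idle_threshold" ]; then
    echo idle
  else
    echo neither
  fi
}

reactor_state ' 453133 root 20 0 128.2g 47616 34176 R 99.9 0.1 0:02.44 reactor_0' 30 65
# -> busy
```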
00:40:49.495 16:46:38 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:49.495 16:46:39 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:40:49.495 16:46:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:40:49.495 16:46:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:49.495 16:46:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:49.495 16:46:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 453133 0 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453133 0 idle 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:50.874 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453133 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.16 reactor_0' 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453133 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:20.16 reactor_0 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:51.134 16:46:41 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 453133 1 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 453133 1 idle 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=453133 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 453133 -w 256 00:40:51.134 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 453137 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:09.87 reactor_1' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 453137 root 20 0 128.2g 59904 34176 S 0.0 0.1 0:09.87 reactor_1 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:51.393 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:40:51.393 
16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:51.393 rmmod nvme_tcp 00:40:51.393 rmmod nvme_fabrics 00:40:51.393 rmmod nvme_keyring 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 453133 ']' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 453133 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 453133 ']' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 453133 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453133 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453133' 00:40:51.393 killing process with pid 453133 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 453133 00:40:51.393 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 453133 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:51.652 
16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:40:51.652 16:46:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:54.185 16:46:43 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:54.185 00:40:54.185 real 0m18.821s 00:40:54.185 user 0m37.250s 00:40:54.185 sys 0m6.637s 00:40:54.185 16:46:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.185 16:46:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:40:54.185 ************************************ 00:40:54.185 END TEST nvmf_interrupt 00:40:54.185 ************************************ 00:40:54.185 00:40:54.185 real 33m3.607s 00:40:54.185 user 87m45.135s 00:40:54.185 sys 8m5.529s 00:40:54.185 16:46:43 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:54.185 16:46:43 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.185 ************************************ 00:40:54.185 END TEST nvmf_tcp 00:40:54.185 ************************************ 00:40:54.185 16:46:43 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:40:54.185 16:46:43 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:54.185 16:46:43 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:40:54.185 16:46:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:54.185 16:46:43 -- common/autotest_common.sh@10 -- # set +x 00:40:54.185 ************************************ 00:40:54.185 START TEST spdkcli_nvmf_tcp 00:40:54.185 ************************************ 00:40:54.185 16:46:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:40:54.185 * Looking for test storage... 00:40:54.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:40:54.185 16:46:44 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:40:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.185 --rc genhtml_branch_coverage=1 00:40:54.185 --rc genhtml_function_coverage=1 00:40:54.185 --rc genhtml_legend=1 00:40:54.185 --rc geninfo_all_blocks=1 00:40:54.185 --rc 
geninfo_unexecuted_blocks=1 00:40:54.185 00:40:54.185 ' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:40:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.185 --rc genhtml_branch_coverage=1 00:40:54.185 --rc genhtml_function_coverage=1 00:40:54.185 --rc genhtml_legend=1 00:40:54.185 --rc geninfo_all_blocks=1 00:40:54.185 --rc geninfo_unexecuted_blocks=1 00:40:54.185 00:40:54.185 ' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:40:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.185 --rc genhtml_branch_coverage=1 00:40:54.185 --rc genhtml_function_coverage=1 00:40:54.185 --rc genhtml_legend=1 00:40:54.185 --rc geninfo_all_blocks=1 00:40:54.185 --rc geninfo_unexecuted_blocks=1 00:40:54.185 00:40:54.185 ' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:40:54.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:54.185 --rc genhtml_branch_coverage=1 00:40:54.185 --rc genhtml_function_coverage=1 00:40:54.185 --rc genhtml_legend=1 00:40:54.185 --rc geninfo_all_blocks=1 00:40:54.185 --rc geninfo_unexecuted_blocks=1 00:40:54.185 00:40:54.185 ' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:54.185 16:46:44 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:40:54.186 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=455700 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 455700 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 
455700 ']' 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:54.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 [2024-11-19 16:46:44.194005] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:40:54.186 [2024-11-19 16:46:44.194131] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid455700 ] 00:40:54.186 [2024-11-19 16:46:44.262891] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:40:54.186 [2024-11-19 16:46:44.308472] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:40:54.186 [2024-11-19 16:46:44.308476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ 
tcp == \r\d\m\a ]] 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:40:54.186 16:46:44 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:40:54.186 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:40:54.186 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:40:54.186 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:40:54.186 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:40:54.186 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:40:54.186 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:40:54.186 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:54.186 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:54.186 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:40:54.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:40:54.186 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:40:54.186 ' 00:40:57.470 [2024-11-19 16:46:47.150710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:58.403 [2024-11-19 16:46:48.439150] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:41:00.929 [2024-11-19 16:46:50.786378] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:41:02.829 [2024-11-19 16:46:52.808713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:04.203 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:04.203 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:04.203 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:04.203 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:04.203 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:04.203 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:04.204 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:04.204 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:04.204 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:04.204 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:04.204 16:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:04.769 16:46:54 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:04.769 16:46:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:04.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:41:04.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:04.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:04.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:04.769 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:04.769 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:04.769 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:04.769 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:04.769 ' 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:10.033 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:10.033 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:10.033 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:10.033 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 455700 ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 455700' 00:41:10.290 killing process with pid 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 455700 00:41:10.290 16:47:00 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 455700 ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 455700 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 455700 ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 455700 00:41:10.290 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (455700) - No such process 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 455700 is not found' 00:41:10.290 Process with pid 455700 is not found 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:10.290 00:41:10.290 real 0m16.644s 00:41:10.290 user 0m35.488s 00:41:10.290 sys 0m0.834s 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.290 16:47:00 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:10.290 ************************************ 00:41:10.290 END TEST spdkcli_nvmf_tcp 00:41:10.290 ************************************ 00:41:10.550 16:47:00 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:10.550 16:47:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:41:10.550 16:47:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:10.550 16:47:00 -- common/autotest_common.sh@10 -- # set +x 00:41:10.550 ************************************ 00:41:10.550 START TEST nvmf_identify_passthru 00:41:10.550 ************************************ 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:10.550 * Looking for test storage... 00:41:10.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.550 --rc genhtml_branch_coverage=1 00:41:10.550 --rc genhtml_function_coverage=1 00:41:10.550 --rc genhtml_legend=1 00:41:10.550 
--rc geninfo_all_blocks=1 00:41:10.550 --rc geninfo_unexecuted_blocks=1 00:41:10.550 00:41:10.550 ' 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.550 --rc genhtml_branch_coverage=1 00:41:10.550 --rc genhtml_function_coverage=1 00:41:10.550 --rc genhtml_legend=1 00:41:10.550 --rc geninfo_all_blocks=1 00:41:10.550 --rc geninfo_unexecuted_blocks=1 00:41:10.550 00:41:10.550 ' 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:41:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.550 --rc genhtml_branch_coverage=1 00:41:10.550 --rc genhtml_function_coverage=1 00:41:10.550 --rc genhtml_legend=1 00:41:10.550 --rc geninfo_all_blocks=1 00:41:10.550 --rc geninfo_unexecuted_blocks=1 00:41:10.550 00:41:10.550 ' 00:41:10.550 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:10.550 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.550 --rc genhtml_branch_coverage=1 00:41:10.550 --rc genhtml_function_coverage=1 00:41:10.550 --rc genhtml_legend=1 00:41:10.550 --rc geninfo_all_blocks=1 00:41:10.550 --rc geninfo_unexecuted_blocks=1 00:41:10.550 00:41:10.550 ' 00:41:10.550 16:47:00 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.550 16:47:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.550 16:47:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.550 16:47:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.550 16:47:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.550 16:47:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:10.550 16:47:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:10.550 16:47:00 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.550 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:10.550 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:10.551 16:47:00 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.551 16:47:00 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:10.551 16:47:00 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.551 16:47:00 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.551 16:47:00 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.551 16:47:00 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.551 16:47:00 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.551 16:47:00 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.551 16:47:00 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:10.551 16:47:00 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.551 16:47:00 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.551 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:10.551 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:10.551 16:47:00 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:10.551 16:47:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:13.113 
16:47:02 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:13.113 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:13.113 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:13.113 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:13.113 16:47:02 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:13.113 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:13.113 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:13.114 
16:47:02 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:13.114 16:47:02 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:13.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:13.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.333 ms 00:41:13.114 00:41:13.114 --- 10.0.0.2 ping statistics --- 00:41:13.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:13.114 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:13.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:13.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:41:13.114 00:41:13.114 --- 10.0.0.1 ping statistics --- 00:41:13.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:13.114 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:13.114 16:47:03 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:13.114 
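The nvmf_tcp_init steps traced above boil down to a small network-namespace setup. The fragment below is a non-authoritative outline reconstructed from the trace (interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are taken from the log); it requires root and is shown for orientation only, not as the test's own script.

```
# Move the target-side interface into its own namespace and address both
# ends, mirroring the nvmf_tcp_init trace above. Requires root.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Sanity-check reachability in both directions, as the log does.
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```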
16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:41:13.114 16:47:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:13.114 16:47:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:41:17.303 16:47:07 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:41:17.303 16:47:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:41:17.303 16:47:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:41:17.303 16:47:07 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=460313 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:41:21.489 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 460313 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 460313 ']' 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:21.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:21.489 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.489 [2024-11-19 16:47:11.735044] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:41:21.489 [2024-11-19 16:47:11.735153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:21.489 [2024-11-19 16:47:11.809981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:21.747 [2024-11-19 16:47:11.860966] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:21.747 [2024-11-19 16:47:11.861030] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:21.747 [2024-11-19 16:47:11.861044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:21.747 [2024-11-19 16:47:11.861055] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:21.747 [2024-11-19 16:47:11.861065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:21.747 [2024-11-19 16:47:11.862691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:41:21.747 [2024-11-19 16:47:11.862757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:41:21.747 [2024-11-19 16:47:11.862780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:41:21.747 [2024-11-19 16:47:11.862784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:21.747 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:21.747 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:41:21.748 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:41:21.748 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.748 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.748 INFO: Log level set to 20 00:41:21.748 INFO: Requests: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "method": "nvmf_set_config", 00:41:21.748 "id": 1, 00:41:21.748 "params": { 00:41:21.748 "admin_cmd_passthru": { 00:41:21.748 "identify_ctrlr": true 00:41:21.748 } 00:41:21.748 } 00:41:21.748 } 00:41:21.748 00:41:21.748 INFO: response: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "id": 1, 00:41:21.748 "result": true 00:41:21.748 } 00:41:21.748 00:41:21.748 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.748 16:47:11 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:41:21.748 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.748 16:47:11 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.748 INFO: Setting log level to 20 00:41:21.748 INFO: Setting log level to 20 00:41:21.748 INFO: Log level set to 20 00:41:21.748 INFO: Log level set to 20 00:41:21.748 
INFO: Requests: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "method": "framework_start_init", 00:41:21.748 "id": 1 00:41:21.748 } 00:41:21.748 00:41:21.748 INFO: Requests: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "method": "framework_start_init", 00:41:21.748 "id": 1 00:41:21.748 } 00:41:21.748 00:41:21.748 [2024-11-19 16:47:12.072435] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:41:21.748 INFO: response: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "id": 1, 00:41:21.748 "result": true 00:41:21.748 } 00:41:21.748 00:41:21.748 INFO: response: 00:41:21.748 { 00:41:21.748 "jsonrpc": "2.0", 00:41:21.748 "id": 1, 00:41:21.748 "result": true 00:41:21.748 } 00:41:21.748 00:41:21.748 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:21.748 16:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:21.748 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:21.748 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:21.748 INFO: Setting log level to 40 00:41:21.748 INFO: Setting log level to 40 00:41:21.748 INFO: Setting log level to 40 00:41:21.748 [2024-11-19 16:47:12.082490] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:22.006 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:22.006 16:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:41:22.006 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:22.006 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:22.006 16:47:12 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:41:22.006 16:47:12 
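The rpc_cmd traffic logged above (INFO: Requests / INFO: response) is plain JSON-RPC 2.0 sent to the target's UNIX socket. A minimal sketch of assembling the nvmf_set_config request exactly as it appears in the trace; the method and parameter names come from the log, while the builder function itself is hypothetical, not part of SPDK's tooling.

```python
import json

def build_rpc_request(method, request_id, params=None):
    """Assemble a JSON-RPC 2.0 request like the ones rpc_cmd logs above."""
    req = {"jsonrpc": "2.0", "method": method, "id": request_id}
    if params is not None:
        req["params"] = params
    return req

# The --passthru-identify-ctrlr call as it appears in the INFO trace.
set_config = build_rpc_request(
    "nvmf_set_config", 1,
    {"admin_cmd_passthru": {"identify_ctrlr": True}},
)
print(json.dumps(set_config, indent=2))
```

Note that framework_start_init in the same trace carries no "params" member at all, which the builder reproduces by omitting the key when params is None.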
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:22.006 16:47:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.287 Nvme0n1 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.287 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.287 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.287 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.287 [2024-11-19 16:47:14.976644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:25.287 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.288 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:41:25.288 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.288 16:47:14 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.288 [ 00:41:25.288 { 00:41:25.288 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:41:25.288 "subtype": "Discovery", 00:41:25.288 "listen_addresses": [], 00:41:25.288 "allow_any_host": true, 00:41:25.288 "hosts": [] 00:41:25.288 }, 00:41:25.288 { 00:41:25.288 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:41:25.288 "subtype": "NVMe", 00:41:25.288 "listen_addresses": [ 00:41:25.288 { 00:41:25.288 "trtype": "TCP", 00:41:25.288 "adrfam": "IPv4", 00:41:25.288 "traddr": "10.0.0.2", 00:41:25.288 "trsvcid": "4420" 00:41:25.288 } 00:41:25.288 ], 00:41:25.288 "allow_any_host": true, 00:41:25.288 "hosts": [], 00:41:25.288 "serial_number": "SPDK00000000000001", 00:41:25.288 "model_number": "SPDK bdev Controller", 00:41:25.288 "max_namespaces": 1, 00:41:25.288 "min_cntlid": 1, 00:41:25.288 "max_cntlid": 65519, 00:41:25.288 "namespaces": [ 00:41:25.288 { 00:41:25.288 "nsid": 1, 00:41:25.288 "bdev_name": "Nvme0n1", 00:41:25.288 "name": "Nvme0n1", 00:41:25.288 "nguid": "06CAB445013D4AA49E6809AC025164DB", 00:41:25.288 "uuid": "06cab445-013d-4aa4-9e68-09ac025164db" 00:41:25.288 } 00:41:25.288 ] 00:41:25.288 } 00:41:25.288 ] 00:41:25.288 16:47:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.288 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:25.288 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:41:25.288 16:47:14 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
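Once the per-line timestamps are stripped, the nvmf_get_subsystems listing above is ordinary JSON. A sketch of pulling one subsystem's fields out of such a response; the sample data is abbreviated from the log and the helper is hypothetical, not the test's own tooling.

```python
import json

# Abbreviated nvmf_get_subsystems output, taken from the trace above.
SUBSYSTEMS_JSON = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "serial_number": "SPDK00000000000001",
   "model_number": "SPDK bdev Controller",
   "namespaces": [{"nsid": 1, "bdev_name": "Nvme0n1"}]}
]
"""

def find_subsystem(raw, nqn):
    """Return the subsystem entry matching an NQN, or None if absent."""
    return next((s for s in json.loads(raw) if s["nqn"] == nqn), None)

sub = find_subsystem(SUBSYSTEMS_JSON, "nqn.2016-06.io.spdk:cnode1")
print(sub["serial_number"])  # SPDK00000000000001
```

The test itself takes a different route to the same check: it runs spdk_nvme_identify against the TCP listener and greps the serial and model numbers, then compares them with the values read earlier over PCIe.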
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:41:25.288 16:47:15 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:25.288 rmmod nvme_tcp 00:41:25.288 rmmod nvme_fabrics 00:41:25.288 rmmod nvme_keyring 00:41:25.288 16:47:15 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 460313 ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 460313 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 460313 ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 460313 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 460313 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 460313' 00:41:25.288 killing process with pid 460313 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 460313 00:41:25.288 16:47:15 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 460313 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# iptables-restore 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:27.193 16:47:17 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:27.193 16:47:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:27.193 16:47:17 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.099 16:47:19 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:29.099 00:41:29.099 real 0m18.486s 00:41:29.099 user 0m27.648s 00:41:29.099 sys 0m2.434s 00:41:29.099 16:47:19 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:29.099 16:47:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:29.099 ************************************ 00:41:29.099 END TEST nvmf_identify_passthru 00:41:29.099 ************************************ 00:41:29.099 16:47:19 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:29.099 16:47:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:29.099 16:47:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:29.099 16:47:19 -- common/autotest_common.sh@10 -- # set +x 00:41:29.099 ************************************ 00:41:29.099 START TEST nvmf_dif 00:41:29.099 ************************************ 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:41:29.099 * Looking for test storage... 
00:41:29.099 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:41:29.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.099 --rc genhtml_branch_coverage=1 00:41:29.099 --rc genhtml_function_coverage=1 00:41:29.099 --rc genhtml_legend=1 00:41:29.099 --rc geninfo_all_blocks=1 00:41:29.099 --rc geninfo_unexecuted_blocks=1 00:41:29.099 00:41:29.099 ' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:41:29.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.099 --rc genhtml_branch_coverage=1 00:41:29.099 --rc genhtml_function_coverage=1 00:41:29.099 --rc genhtml_legend=1 00:41:29.099 --rc geninfo_all_blocks=1 00:41:29.099 --rc geninfo_unexecuted_blocks=1 00:41:29.099 00:41:29.099 ' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:41:29.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.099 --rc genhtml_branch_coverage=1 00:41:29.099 --rc genhtml_function_coverage=1 00:41:29.099 --rc genhtml_legend=1 00:41:29.099 --rc geninfo_all_blocks=1 00:41:29.099 --rc geninfo_unexecuted_blocks=1 00:41:29.099 00:41:29.099 ' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:41:29.099 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:29.099 --rc genhtml_branch_coverage=1 00:41:29.099 --rc genhtml_function_coverage=1 00:41:29.099 --rc genhtml_legend=1 00:41:29.099 --rc geninfo_all_blocks=1 00:41:29.099 --rc geninfo_unexecuted_blocks=1 00:41:29.099 00:41:29.099 ' 00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:41:29.099 16:47:19 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:29.099 16:47:19 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:29.099 16:47:19 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.099 16:47:19 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.099 16:47:19 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.099 16:47:19 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:41:29.099 16:47:19 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:29.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:41:29.099 16:47:19 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:29.099 16:47:19 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:41:29.099 16:47:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:41:31.631 16:47:21 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:41:31.631 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:41:31.631 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:31.631 16:47:21 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:41:31.631 Found net devices under 0000:0a:00.0: cvl_0_0 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:41:31.631 Found net devices under 0000:0a:00.1: cvl_0_1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:31.631 
16:47:21 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:31.631 16:47:21 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:31.631 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:31.631 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:41:31.631 00:41:31.632 --- 10.0.0.2 ping statistics --- 00:41:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.632 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:41:31.632 16:47:21 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:31.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:31.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:41:31.632 00:41:31.632 --- 10.0.0.1 ping statistics --- 00:41:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:31.632 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:41:31.632 16:47:21 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:31.632 16:47:21 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:41:31.632 16:47:21 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:41:31.632 16:47:21 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:41:32.568 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:32.568 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:41:32.568 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:32.568 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:32.568 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:41:32.568 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:32.568 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:32.568 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:32.568 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:32.568 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:41:32.568 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:41:32.568 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:41:32.568 0000:80:04.4 (8086 0e24): Already 
using the vfio-pci driver 00:41:32.568 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:41:32.568 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:41:32.568 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:41:32.568 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:32.827 16:47:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:41:32.827 16:47:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=463476 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:41:32.827 16:47:22 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 463476 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 463476 ']' 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:41:32.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:32.827 16:47:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:32.827 [2024-11-19 16:47:22.989427] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:41:32.827 [2024-11-19 16:47:22.989524] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:32.827 [2024-11-19 16:47:23.064337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:41:32.827 [2024-11-19 16:47:23.112007] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:32.827 [2024-11-19 16:47:23.112088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:32.827 [2024-11-19 16:47:23.112114] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:32.827 [2024-11-19 16:47:23.112127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:32.827 [2024-11-19 16:47:23.112138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:41:32.827 [2024-11-19 16:47:23.112760] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:41:33.086 16:47:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 16:47:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:33.086 16:47:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:41:33.086 16:47:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 [2024-11-19 16:47:23.250471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.086 16:47:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 ************************************ 00:41:33.086 START TEST fio_dif_1_default 00:41:33.086 ************************************ 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 bdev_null0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:33.086 [2024-11-19 16:47:23.306740] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:33.086 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:33.086 { 00:41:33.086 "params": { 00:41:33.086 "name": "Nvme$subsystem", 00:41:33.086 "trtype": "$TEST_TRANSPORT", 00:41:33.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:33.086 "adrfam": "ipv4", 00:41:33.086 "trsvcid": "$NVMF_PORT", 00:41:33.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:33.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:33.086 "hdgst": ${hdgst:-false}, 00:41:33.086 "ddgst": ${ddgst:-false} 00:41:33.086 }, 00:41:33.086 "method": "bdev_nvme_attach_controller" 00:41:33.086 } 00:41:33.086 EOF 00:41:33.086 )") 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:41:33.087 16:47:23 
nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:33.087 "params": { 00:41:33.087 "name": "Nvme0", 00:41:33.087 "trtype": "tcp", 00:41:33.087 "traddr": "10.0.0.2", 00:41:33.087 "adrfam": "ipv4", 00:41:33.087 "trsvcid": "4420", 00:41:33.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:33.087 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:33.087 "hdgst": false, 00:41:33.087 "ddgst": false 00:41:33.087 }, 00:41:33.087 "method": "bdev_nvme_attach_controller" 00:41:33.087 }' 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:33.087 16:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:33.345 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:33.345 fio-3.35 00:41:33.345 Starting 1 thread 00:41:45.564 00:41:45.564 filename0: (groupid=0, jobs=1): err= 0: pid=463706: Tue Nov 19 16:47:34 2024 00:41:45.564 read: IOPS=99, BW=396KiB/s (406kB/s)(3968KiB/10012msec) 00:41:45.564 slat (nsec): min=3871, max=69797, avg=9436.20, stdev=3365.48 00:41:45.564 clat (usec): min=550, max=44905, avg=40341.15, stdev=5095.98 00:41:45.564 lat (usec): min=558, max=44937, avg=40350.59, stdev=5094.96 00:41:45.564 clat percentiles (usec): 00:41:45.564 | 1.00th=[ 594], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:45.564 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:45.564 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:45.564 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:41:45.564 | 99.99th=[44827] 00:41:45.564 bw ( KiB/s): min= 384, max= 448, per=99.67%, avg=395.20, stdev=18.79, samples=20 00:41:45.564 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:41:45.564 lat (usec) : 750=1.61% 00:41:45.564 lat (msec) : 50=98.39% 00:41:45.564 cpu : usr=90.07%, sys=9.65%, ctx=14, majf=0, minf=225 00:41:45.564 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:45.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:45.564 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:45.564 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:45.564 00:41:45.564 Run status group 0 (all jobs): 00:41:45.564 READ: bw=396KiB/s (406kB/s), 396KiB/s-396KiB/s (406kB/s-406kB/s), io=3968KiB (4063kB), run=10012-10012msec 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # 
destroy_subsystems 0 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.564 00:41:45.564 real 0m11.001s 00:41:45.564 user 0m9.948s 00:41:45.564 sys 0m1.213s 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:45.564 16:47:34 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:41:45.564 ************************************ 00:41:45.564 END TEST fio_dif_1_default 00:41:45.564 ************************************ 00:41:45.564 16:47:34 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:41:45.564 16:47:34 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:45.564 16:47:34 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif -- common/autotest_common.sh@10 -- # 
set +x 00:41:45.565 ************************************ 00:41:45.565 START TEST fio_dif_1_multi_subsystems 00:41:45.565 ************************************ 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 bdev_null0 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 [2024-11-19 16:47:34.359178] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 bdev_null1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:41:45.565 16:47:34 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat 
<<-EOF 00:41:45.565 { 00:41:45.565 "params": { 00:41:45.565 "name": "Nvme$subsystem", 00:41:45.565 "trtype": "$TEST_TRANSPORT", 00:41:45.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:45.565 "adrfam": "ipv4", 00:41:45.565 "trsvcid": "$NVMF_PORT", 00:41:45.565 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:45.565 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:45.565 "hdgst": ${hdgst:-false}, 00:41:45.565 "ddgst": ${ddgst:-false} 00:41:45.565 }, 00:41:45.565 "method": "bdev_nvme_attach_controller" 00:41:45.565 } 00:41:45.565 EOF 00:41:45.565 )") 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:41:45.565 
16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:45.565 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:45.565 { 00:41:45.565 "params": { 00:41:45.565 "name": "Nvme$subsystem", 00:41:45.565 "trtype": "$TEST_TRANSPORT", 00:41:45.565 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:45.566 "adrfam": "ipv4", 00:41:45.566 "trsvcid": "$NVMF_PORT", 00:41:45.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:45.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:45.566 "hdgst": ${hdgst:-false}, 00:41:45.566 "ddgst": ${ddgst:-false} 00:41:45.566 }, 00:41:45.566 "method": "bdev_nvme_attach_controller" 00:41:45.566 } 00:41:45.566 EOF 00:41:45.566 )") 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:41:45.566 
16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:45.566 "params": { 00:41:45.566 "name": "Nvme0", 00:41:45.566 "trtype": "tcp", 00:41:45.566 "traddr": "10.0.0.2", 00:41:45.566 "adrfam": "ipv4", 00:41:45.566 "trsvcid": "4420", 00:41:45.566 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:45.566 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:45.566 "hdgst": false, 00:41:45.566 "ddgst": false 00:41:45.566 }, 00:41:45.566 "method": "bdev_nvme_attach_controller" 00:41:45.566 },{ 00:41:45.566 "params": { 00:41:45.566 "name": "Nvme1", 00:41:45.566 "trtype": "tcp", 00:41:45.566 "traddr": "10.0.0.2", 00:41:45.566 "adrfam": "ipv4", 00:41:45.566 "trsvcid": "4420", 00:41:45.566 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:45.566 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:45.566 "hdgst": false, 00:41:45.566 "ddgst": false 00:41:45.566 }, 00:41:45.566 "method": "bdev_nvme_attach_controller" 00:41:45.566 }' 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 
00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:45.566 16:47:34 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:45.566 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:45.566 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:41:45.566 fio-3.35 00:41:45.566 Starting 2 threads 00:41:55.569 00:41:55.569 filename0: (groupid=0, jobs=1): err= 0: pid=465100: Tue Nov 19 16:47:45 2024 00:41:55.569 read: IOPS=246, BW=986KiB/s (1009kB/s)(9872KiB/10014msec) 00:41:55.569 slat (nsec): min=6885, max=90816, avg=8765.52, stdev=3296.09 00:41:55.569 clat (usec): min=516, max=42680, avg=16202.15, stdev=19775.78 00:41:55.569 lat (usec): min=523, max=42691, avg=16210.91, stdev=19775.94 00:41:55.569 clat percentiles (usec): 00:41:55.569 | 1.00th=[ 578], 5.00th=[ 603], 10.00th=[ 619], 20.00th=[ 652], 00:41:55.569 | 30.00th=[ 693], 40.00th=[ 725], 50.00th=[ 758], 60.00th=[ 873], 00:41:55.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:55.569 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:41:55.569 | 99.99th=[42730] 00:41:55.569 bw ( KiB/s): min= 704, max= 2368, per=71.33%, avg=985.60, stdev=420.14, samples=20 00:41:55.569 iops : min= 176, max= 592, avg=246.40, stdev=105.03, samples=20 00:41:55.569 lat (usec) : 750=48.06%, 1000=13.70% 00:41:55.569 lat (msec) : 2=0.16%, 50=38.09% 00:41:55.569 cpu : usr=95.00%, sys=4.67%, ctx=15, majf=0, minf=156 00:41:55.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.569 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.569 issued rwts: total=2468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:55.569 filename1: (groupid=0, jobs=1): err= 0: pid=465101: Tue Nov 19 16:47:45 2024 00:41:55.569 read: IOPS=98, BW=396KiB/s (405kB/s)(3968KiB/10022msec) 00:41:55.569 slat (nsec): min=6890, max=40729, avg=8896.20, stdev=3079.75 00:41:55.569 clat (usec): min=592, max=43057, avg=40381.30, stdev=5086.46 00:41:55.569 lat (usec): min=599, max=43072, avg=40390.19, stdev=5086.20 00:41:55.569 clat percentiles (usec): 00:41:55.569 | 1.00th=[ 766], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:55.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:55.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:55.569 | 99.00th=[42206], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:41:55.569 | 99.99th=[43254] 00:41:55.569 bw ( KiB/s): min= 384, max= 448, per=28.60%, avg=395.20, stdev=18.79, samples=20 00:41:55.569 iops : min= 96, max= 112, avg=98.80, stdev= 4.70, samples=20 00:41:55.569 lat (usec) : 750=0.81%, 1000=0.81% 00:41:55.569 lat (msec) : 50=98.39% 00:41:55.569 cpu : usr=95.01%, sys=4.67%, ctx=15, majf=0, minf=184 00:41:55.569 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:55.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:55.569 issued rwts: total=992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:55.569 latency : target=0, window=0, percentile=100.00%, depth=4 00:41:55.569 00:41:55.569 Run status group 0 (all jobs): 00:41:55.569 READ: bw=1381KiB/s (1414kB/s), 396KiB/s-986KiB/s (405kB/s-1009kB/s), io=13.5MiB (14.2MB), run=10014-10022msec 00:41:55.569 
16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.569 00:41:55.569 real 0m11.303s 00:41:55.569 user 0m20.325s 00:41:55.569 sys 0m1.242s 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 ************************************ 00:41:55.569 END TEST fio_dif_1_multi_subsystems 00:41:55.569 ************************************ 00:41:55.569 16:47:45 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:41:55.569 16:47:45 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:41:55.569 16:47:45 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 ************************************ 00:41:55.569 START TEST fio_dif_rand_params 00:41:55.569 ************************************ 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:41:55.569 16:47:45 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 bdev_null0 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.569 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.570 
16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:41:55.570 [2024-11-19 16:47:45.712270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:55.570 16:47:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:55.570 { 00:41:55.570 "params": { 00:41:55.570 "name": "Nvme$subsystem", 00:41:55.570 "trtype": "$TEST_TRANSPORT", 00:41:55.570 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:55.570 "adrfam": "ipv4", 00:41:55.570 "trsvcid": "$NVMF_PORT", 00:41:55.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:55.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:55.570 "hdgst": ${hdgst:-false}, 00:41:55.570 "ddgst": ${ddgst:-false} 00:41:55.570 }, 00:41:55.570 "method": "bdev_nvme_attach_controller" 00:41:55.570 } 00:41:55.570 EOF 00:41:55.570 )") 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:41:55.570 
16:47:45 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:55.570 "params": { 00:41:55.570 "name": "Nvme0", 00:41:55.570 "trtype": "tcp", 00:41:55.570 "traddr": "10.0.0.2", 00:41:55.570 "adrfam": "ipv4", 00:41:55.570 "trsvcid": "4420", 00:41:55.570 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:41:55.570 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:41:55.570 "hdgst": false, 00:41:55.570 "ddgst": false 00:41:55.570 }, 00:41:55.570 "method": "bdev_nvme_attach_controller" 00:41:55.570 }' 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:41:55.570 16:47:45 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:41:55.570 16:47:45 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:41:55.830 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:41:55.830 ... 00:41:55.830 fio-3.35 00:41:55.830 Starting 3 threads 00:42:02.401 00:42:02.401 filename0: (groupid=0, jobs=1): err= 0: pid=466503: Tue Nov 19 16:47:51 2024 00:42:02.401 read: IOPS=207, BW=26.0MiB/s (27.3MB/s)(131MiB/5044msec) 00:42:02.401 slat (nsec): min=7126, max=43394, avg=15812.93, stdev=5323.87 00:42:02.401 clat (usec): min=4611, max=92692, avg=14364.36, stdev=10313.94 00:42:02.401 lat (usec): min=4618, max=92709, avg=14380.17, stdev=10313.58 00:42:02.401 clat percentiles (usec): 00:42:02.401 | 1.00th=[ 5014], 5.00th=[ 7898], 10.00th=[ 8455], 20.00th=[ 9896], 00:42:02.401 | 30.00th=[11076], 40.00th=[11731], 50.00th=[12256], 60.00th=[12780], 00:42:02.401 | 70.00th=[13304], 80.00th=[14091], 90.00th=[15795], 95.00th=[49021], 00:42:02.401 | 99.00th=[53216], 99.50th=[54789], 99.90th=[92799], 99.95th=[92799], 00:42:02.401 | 99.99th=[92799] 00:42:02.401 bw ( KiB/s): min=21504, max=30720, per=31.05%, avg=26803.20, stdev=3389.89, samples=10 00:42:02.401 iops : min= 168, max= 240, avg=209.40, stdev=26.48, samples=10 00:42:02.401 lat (msec) : 10=20.50%, 20=73.21%, 50=1.81%, 100=4.48% 00:42:02.401 cpu : usr=94.07%, sys=5.43%, ctx=15, majf=0, minf=84 00:42:02.401 IO depths : 1=0.6%, 2=99.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.401 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.401 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:02.401 filename0: (groupid=0, jobs=1): err= 0: pid=466504: Tue Nov 19 16:47:51 2024 00:42:02.401 read: IOPS=231, BW=28.9MiB/s 
(30.3MB/s)(146MiB/5044msec) 00:42:02.402 slat (usec): min=7, max=115, avg=15.19, stdev= 5.69 00:42:02.402 clat (usec): min=4552, max=54494, avg=12920.82, stdev=7702.78 00:42:02.402 lat (usec): min=4565, max=54512, avg=12936.01, stdev=7702.95 00:42:02.402 clat percentiles (usec): 00:42:02.402 | 1.00th=[ 4883], 5.00th=[ 5800], 10.00th=[ 7832], 20.00th=[ 9110], 00:42:02.402 | 30.00th=[10159], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:42:02.402 | 70.00th=[13435], 80.00th=[14615], 90.00th=[15533], 95.00th=[16581], 00:42:02.402 | 99.00th=[53216], 99.50th=[53740], 99.90th=[54264], 99.95th=[54264], 00:42:02.402 | 99.99th=[54264] 00:42:02.402 bw ( KiB/s): min=23040, max=34560, per=34.52%, avg=29798.40, stdev=4053.47, samples=10 00:42:02.402 iops : min= 180, max= 270, avg=232.80, stdev=31.67, samples=10 00:42:02.402 lat (msec) : 10=28.30%, 20=68.44%, 50=0.77%, 100=2.49% 00:42:02.402 cpu : usr=93.97%, sys=5.51%, ctx=7, majf=0, minf=188 00:42:02.402 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.402 issued rwts: total=1166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:02.402 filename0: (groupid=0, jobs=1): err= 0: pid=466505: Tue Nov 19 16:47:51 2024 00:42:02.402 read: IOPS=235, BW=29.4MiB/s (30.9MB/s)(149MiB/5047msec) 00:42:02.402 slat (usec): min=7, max=121, avg=20.20, stdev= 7.98 00:42:02.402 clat (usec): min=5120, max=93337, avg=12672.93, stdev=8551.34 00:42:02.402 lat (usec): min=5133, max=93359, avg=12693.13, stdev=8551.82 00:42:02.402 clat percentiles (usec): 00:42:02.402 | 1.00th=[ 5538], 5.00th=[ 7177], 10.00th=[ 8029], 20.00th=[ 8979], 00:42:02.402 | 30.00th=[10159], 40.00th=[10945], 50.00th=[11469], 60.00th=[11994], 00:42:02.402 | 70.00th=[12518], 80.00th=[13173], 90.00th=[14353], 
95.00th=[15533], 00:42:02.402 | 99.00th=[52167], 99.50th=[53740], 99.90th=[92799], 99.95th=[93848], 00:42:02.402 | 99.99th=[93848] 00:42:02.402 bw ( KiB/s): min=21504, max=35840, per=35.17%, avg=30361.60, stdev=4376.53, samples=10 00:42:02.402 iops : min= 168, max= 280, avg=237.20, stdev=34.19, samples=10 00:42:02.402 lat (msec) : 10=28.43%, 20=67.87%, 50=1.26%, 100=2.44% 00:42:02.402 cpu : usr=91.02%, sys=7.27%, ctx=175, majf=0, minf=186 00:42:02.402 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:02.402 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.402 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:02.402 issued rwts: total=1189,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:02.402 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:02.402 00:42:02.402 Run status group 0 (all jobs): 00:42:02.402 READ: bw=84.3MiB/s (88.4MB/s), 26.0MiB/s-29.4MiB/s (27.3MB/s-30.9MB/s), io=426MiB (446MB), run=5044-5047msec 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # 
rpc_cmd bdev_null_delete bdev_null0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 bdev_null0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:02.402 16:47:51 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 [2024-11-19 16:47:51.993085] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 bdev_null1 00:42:02.402 16:47:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 
00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.402 bdev_null2 00:42:02.402 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.403 { 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme$subsystem", 00:42:02.403 "trtype": "$TEST_TRANSPORT", 00:42:02.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "$NVMF_PORT", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.403 "hdgst": ${hdgst:-false}, 00:42:02.403 "ddgst": ${ddgst:-false} 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 } 00:42:02.403 EOF 00:42:02.403 )") 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.403 { 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme$subsystem", 00:42:02.403 "trtype": "$TEST_TRANSPORT", 00:42:02.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "$NVMF_PORT", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.403 "hdgst": ${hdgst:-false}, 00:42:02.403 "ddgst": ${ddgst:-false} 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 } 00:42:02.403 EOF 00:42:02.403 )") 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:02.403 
16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:02.403 { 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme$subsystem", 00:42:02.403 "trtype": "$TEST_TRANSPORT", 00:42:02.403 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "$NVMF_PORT", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:02.403 "hdgst": ${hdgst:-false}, 00:42:02.403 "ddgst": ${ddgst:-false} 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 } 00:42:02.403 EOF 00:42:02.403 )") 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:02.403 16:47:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme0", 00:42:02.403 "trtype": "tcp", 00:42:02.403 "traddr": "10.0.0.2", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "4420", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:02.403 "hdgst": false, 00:42:02.403 "ddgst": false 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 },{ 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme1", 00:42:02.403 "trtype": "tcp", 00:42:02.403 "traddr": "10.0.0.2", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "4420", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:02.403 "hdgst": false, 00:42:02.403 "ddgst": false 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 },{ 00:42:02.403 "params": { 00:42:02.403 "name": "Nvme2", 00:42:02.403 "trtype": "tcp", 00:42:02.403 "traddr": "10.0.0.2", 00:42:02.403 "adrfam": "ipv4", 00:42:02.403 "trsvcid": "4420", 00:42:02.403 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:02.403 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:02.403 "hdgst": false, 00:42:02.403 "ddgst": false 00:42:02.403 }, 00:42:02.403 "method": "bdev_nvme_attach_controller" 00:42:02.403 }' 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:02.404 16:47:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:02.404 16:47:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:02.404 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:02.404 ... 00:42:02.404 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:02.404 ... 00:42:02.404 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:02.404 ... 
00:42:02.404 fio-3.35 00:42:02.404 Starting 24 threads 00:42:14.613 00:42:14.613 filename0: (groupid=0, jobs=1): err= 0: pid=467363: Tue Nov 19 16:48:03 2024 00:42:14.613 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:42:14.613 slat (usec): min=7, max=118, avg=40.14, stdev=31.95 00:42:14.613 clat (usec): min=15940, max=65707, avg=33455.74, stdev=2170.03 00:42:14.613 lat (usec): min=15955, max=65727, avg=33495.88, stdev=2167.40 00:42:14.613 clat percentiles (usec): 00:42:14.613 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:42:14.613 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.613 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.613 | 99.00th=[34866], 99.50th=[35390], 99.90th=[65799], 99.95th=[65799], 00:42:14.613 | 99.99th=[65799] 00:42:14.613 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=71.93, samples=19 00:42:14.613 iops : min= 416, max= 480, avg=471.58, stdev=17.98, samples=19 00:42:14.613 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:42:14.613 cpu : usr=97.58%, sys=1.56%, ctx=108, majf=0, minf=36 00:42:14.613 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.613 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.613 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.613 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.613 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.613 filename0: (groupid=0, jobs=1): err= 0: pid=467364: Tue Nov 19 16:48:03 2024 00:42:14.613 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10005msec) 00:42:14.613 slat (usec): min=10, max=127, avg=42.53, stdev=14.89 00:42:14.613 clat (usec): min=23227, max=44702, avg=33396.64, stdev=872.29 00:42:14.613 lat (usec): min=23238, max=44728, avg=33439.17, stdev=872.91 00:42:14.613 clat percentiles (usec): 00:42:14.613 | 1.00th=[32375], 
5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.614 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[35914], 99.90th=[44827], 99.95th=[44827], 00:42:14.614 | 99.99th=[44827] 00:42:14.614 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.614 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.614 lat (msec) : 50=100.00% 00:42:14.614 cpu : usr=97.11%, sys=1.90%, ctx=166, majf=0, minf=34 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, jobs=1): err= 0: pid=467365: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10013msec) 00:42:14.614 slat (usec): min=7, max=117, avg=20.97, stdev=15.52 00:42:14.614 clat (usec): min=13160, max=36131, avg=33417.88, stdev=1668.15 00:42:14.614 lat (usec): min=13216, max=36151, avg=33438.85, stdev=1665.95 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[21890], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:42:14.614 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.614 | 99.99th=[35914] 00:42:14.614 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.614 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.614 lat (msec) : 20=0.67%, 
50=99.33% 00:42:14.614 cpu : usr=97.90%, sys=1.69%, ctx=26, majf=0, minf=51 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, jobs=1): err= 0: pid=467366: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10013msec) 00:42:14.614 slat (usec): min=8, max=116, avg=39.32, stdev=24.02 00:42:14.614 clat (usec): min=14819, max=51347, avg=33390.23, stdev=1745.34 00:42:14.614 lat (usec): min=14872, max=51380, avg=33429.56, stdev=1741.89 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:42:14.614 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[35914], 99.90th=[51119], 99.95th=[51119], 00:42:14.614 | 99.99th=[51119] 00:42:14.614 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.21, stdev=53.30, samples=19 00:42:14.614 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.614 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:42:14.614 cpu : usr=98.25%, sys=1.34%, ctx=19, majf=0, minf=26 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, 
jobs=1): err= 0: pid=467367: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=474, BW=1898KiB/s (1943kB/s)(18.6MiB/10016msec) 00:42:14.614 slat (usec): min=4, max=121, avg=48.23, stdev=19.70 00:42:14.614 clat (usec): min=20818, max=36231, avg=33271.90, stdev=917.68 00:42:14.614 lat (usec): min=20832, max=36244, avg=33320.13, stdev=917.58 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:42:14.614 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:14.614 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[35390], 99.90th=[36439], 99.95th=[36439], 00:42:14.614 | 99.99th=[36439] 00:42:14.614 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.614 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.614 lat (msec) : 50=100.00% 00:42:14.614 cpu : usr=97.82%, sys=1.44%, ctx=62, majf=0, minf=21 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, jobs=1): err= 0: pid=467368: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:42:14.614 slat (nsec): min=8765, max=72121, avg=33106.74, stdev=11387.23 00:42:14.614 clat (usec): min=14462, max=49541, avg=33412.89, stdev=1669.98 00:42:14.614 lat (usec): min=14483, max=49560, avg=33445.99, stdev=1669.00 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.614 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 
60.00th=[33424], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[35914], 99.90th=[49546], 99.95th=[49546], 00:42:14.614 | 99.99th=[49546] 00:42:14.614 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.614 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.614 lat (msec) : 20=0.34%, 50=99.66% 00:42:14.614 cpu : usr=97.38%, sys=1.70%, ctx=109, majf=0, minf=36 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, jobs=1): err= 0: pid=467369: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.614 slat (usec): min=9, max=104, avg=41.75, stdev=14.61 00:42:14.614 clat (usec): min=12811, max=36023, avg=33250.76, stdev=1643.48 00:42:14.614 lat (usec): min=12833, max=36054, avg=33292.51, stdev=1643.78 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[23462], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.614 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.614 | 99.99th=[35914] 00:42:14.614 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.614 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.614 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.614 cpu : usr=96.91%, sys=2.02%, ctx=129, majf=0, minf=53 00:42:14.614 IO depths : 1=6.2%, 
2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename0: (groupid=0, jobs=1): err= 0: pid=467370: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.614 slat (usec): min=12, max=112, avg=42.78, stdev=14.00 00:42:14.614 clat (usec): min=13486, max=36006, avg=33212.92, stdev=1640.49 00:42:14.614 lat (usec): min=13536, max=36051, avg=33255.70, stdev=1640.55 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[23200], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:14.614 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.614 | 99.99th=[35914] 00:42:14.614 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.614 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.614 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.614 cpu : usr=98.27%, sys=1.32%, ctx=23, majf=0, minf=47 00:42:14.614 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.614 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.614 filename1: (groupid=0, jobs=1): err= 0: pid=467371: Tue Nov 19 16:48:03 2024 00:42:14.614 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10013msec) 
00:42:14.614 slat (usec): min=8, max=106, avg=21.31, stdev=14.16 00:42:14.614 clat (usec): min=13113, max=36027, avg=33430.40, stdev=1669.72 00:42:14.614 lat (usec): min=13125, max=36044, avg=33451.72, stdev=1669.20 00:42:14.614 clat percentiles (usec): 00:42:14.614 | 1.00th=[21365], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:42:14.614 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:42:14.614 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.614 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.614 | 99.99th=[35914] 00:42:14.614 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.614 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.614 lat (msec) : 20=0.82%, 50=99.18% 00:42:14.614 cpu : usr=98.06%, sys=1.55%, ctx=23, majf=0, minf=62 00:42:14.614 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.614 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.614 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467372: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:42:14.615 slat (usec): min=8, max=100, avg=38.70, stdev=16.00 00:42:14.615 clat (usec): min=14530, max=49624, avg=33368.76, stdev=1681.28 00:42:14.615 lat (usec): min=14564, max=49643, avg=33407.47, stdev=1679.89 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.615 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 
99.50th=[35914], 99.90th=[49546], 99.95th=[49546], 00:42:14.615 | 99.99th=[49546] 00:42:14.615 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.615 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.615 lat (msec) : 20=0.34%, 50=99.66% 00:42:14.615 cpu : usr=98.30%, sys=1.30%, ctx=15, majf=0, minf=31 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467373: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.615 slat (usec): min=13, max=102, avg=41.79, stdev=13.71 00:42:14.615 clat (usec): min=13393, max=36023, avg=33233.40, stdev=1650.08 00:42:14.615 lat (usec): min=13423, max=36063, avg=33275.19, stdev=1651.18 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[21890], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:14.615 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.615 | 99.99th=[35914] 00:42:14.615 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.615 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.615 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.615 cpu : usr=97.10%, sys=1.99%, ctx=108, majf=0, minf=54 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467374: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=473, BW=1892KiB/s (1938kB/s)(18.5MiB/10011msec) 00:42:14.615 slat (usec): min=12, max=128, avg=77.06, stdev=11.81 00:42:14.615 clat (usec): min=16476, max=85938, avg=33142.13, stdev=2321.78 00:42:14.615 lat (usec): min=16540, max=85971, avg=33219.18, stdev=2320.03 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:42:14.615 | 30.00th=[32637], 40.00th=[32900], 50.00th=[33162], 60.00th=[33162], 00:42:14.615 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[34866], 99.90th=[65274], 99.95th=[65799], 00:42:14.615 | 99.99th=[85459] 00:42:14.615 bw ( KiB/s): min= 1664, max= 1936, per=4.14%, avg=1886.32, stdev=72.13, samples=19 00:42:14.615 iops : min= 416, max= 484, avg=471.58, stdev=18.03, samples=19 00:42:14.615 lat (msec) : 20=0.38%, 50=99.28%, 100=0.34% 00:42:14.615 cpu : usr=98.14%, sys=1.39%, ctx=14, majf=0, minf=28 00:42:14.615 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467375: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.615 slat (usec): min=9, max=104, avg=41.25, stdev=13.47 00:42:14.615 clat (usec): min=13438, max=36052, 
avg=33250.93, stdev=1651.65 00:42:14.615 lat (usec): min=13467, max=36082, avg=33292.17, stdev=1652.33 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[21890], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.615 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.615 | 99.99th=[35914] 00:42:14.615 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.615 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.615 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.615 cpu : usr=98.29%, sys=1.30%, ctx=12, majf=0, minf=30 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467376: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=474, BW=1899KiB/s (1944kB/s)(18.6MiB/10012msec) 00:42:14.615 slat (nsec): min=7908, max=68167, avg=14624.64, stdev=6965.05 00:42:14.615 clat (usec): min=14824, max=49488, avg=33581.57, stdev=1655.56 00:42:14.615 lat (usec): min=14838, max=49508, avg=33596.20, stdev=1655.70 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:42:14.615 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[35914], 99.90th=[49546], 99.95th=[49546], 00:42:14.615 | 99.99th=[49546] 00:42:14.615 bw ( 
KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.615 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.615 lat (msec) : 20=0.34%, 50=99.66% 00:42:14.615 cpu : usr=98.17%, sys=1.42%, ctx=14, majf=0, minf=46 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467377: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10002msec) 00:42:14.615 slat (usec): min=7, max=117, avg=38.15, stdev=15.25 00:42:14.615 clat (usec): min=21231, max=54680, avg=33433.45, stdev=1501.98 00:42:14.615 lat (usec): min=21249, max=54703, avg=33471.60, stdev=1501.16 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.615 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[35914], 99.90th=[54789], 99.95th=[54789], 00:42:14.615 | 99.99th=[54789] 00:42:14.615 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.615 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.615 lat (msec) : 50=99.66%, 100=0.34% 00:42:14.615 cpu : usr=96.69%, sys=2.20%, ctx=217, majf=0, minf=37 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:42:14.615 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename1: (groupid=0, jobs=1): err= 0: pid=467378: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:42:14.615 slat (usec): min=4, max=118, avg=50.61, stdev=20.45 00:42:14.615 clat (usec): min=20829, max=48105, avg=33263.92, stdev=1009.05 00:42:14.615 lat (usec): min=20843, max=48119, avg=33314.53, stdev=1007.30 00:42:14.615 clat percentiles (usec): 00:42:14.615 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:42:14.615 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[35914], 99.90th=[36963], 99.95th=[36963], 00:42:14.615 | 99.99th=[47973] 00:42:14.615 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1893.05, stdev=68.52, samples=19 00:42:14.615 iops : min= 448, max= 512, avg=473.26, stdev=17.13, samples=19 00:42:14.615 lat (msec) : 50=100.00% 00:42:14.615 cpu : usr=97.77%, sys=1.55%, ctx=63, majf=0, minf=40 00:42:14.615 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.615 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.615 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.615 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.615 filename2: (groupid=0, jobs=1): err= 0: pid=467379: Tue Nov 19 16:48:03 2024 00:42:14.615 read: IOPS=473, BW=1894KiB/s (1939kB/s)(18.5MiB/10004msec) 00:42:14.615 slat (usec): min=8, max=107, avg=38.84, stdev=21.03 00:42:14.615 clat (usec): min=21278, max=57019, avg=33478.77, stdev=1625.93 00:42:14.615 lat (usec): min=21313, max=57060, avg=33517.61, stdev=1622.66 00:42:14.615 clat 
percentiles (usec): 00:42:14.615 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.615 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.615 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.615 | 99.00th=[34866], 99.50th=[35914], 99.90th=[56886], 99.95th=[56886], 00:42:14.615 | 99.99th=[56886] 00:42:14.615 bw ( KiB/s): min= 1788, max= 1920, per=4.14%, avg=1886.11, stdev=58.28, samples=19 00:42:14.616 iops : min= 447, max= 480, avg=471.53, stdev=14.57, samples=19 00:42:14.616 lat (msec) : 50=99.66%, 100=0.34% 00:42:14.616 cpu : usr=98.51%, sys=1.09%, ctx=14, majf=0, minf=28 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467380: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=473, BW=1893KiB/s (1939kB/s)(18.5MiB/10007msec) 00:42:14.616 slat (usec): min=4, max=153, avg=79.98, stdev=13.66 00:42:14.616 clat (usec): min=31673, max=47166, avg=33087.89, stdev=1014.58 00:42:14.616 lat (usec): min=31748, max=47191, avg=33167.87, stdev=1012.03 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[31851], 5.00th=[32113], 10.00th=[32375], 20.00th=[32375], 00:42:14.616 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:42:14.616 | 70.00th=[33424], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:42:14.616 | 99.00th=[34341], 99.50th=[35390], 99.90th=[46924], 99.95th=[46924], 00:42:14.616 | 99.99th=[46924] 00:42:14.616 bw ( KiB/s): min= 1792, max= 1923, per=4.15%, avg=1888.60, stdev=56.88, samples=20 00:42:14.616 iops : min= 448, max= 480, avg=472.00, 
stdev=14.22, samples=20 00:42:14.616 lat (msec) : 50=100.00% 00:42:14.616 cpu : usr=98.24%, sys=1.29%, ctx=15, majf=0, minf=29 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467381: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10014msec) 00:42:14.616 slat (nsec): min=7963, max=77594, avg=31499.05, stdev=13105.82 00:42:14.616 clat (usec): min=14528, max=52141, avg=33417.29, stdev=1751.27 00:42:14.616 lat (usec): min=14542, max=52176, avg=33448.79, stdev=1751.21 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[32637], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.616 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.616 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[35914], 99.90th=[52167], 99.95th=[52167], 00:42:14.616 | 99.99th=[52167] 00:42:14.616 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.21, stdev=53.30, samples=19 00:42:14.616 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.616 lat (msec) : 20=0.34%, 50=99.33%, 100=0.34% 00:42:14.616 cpu : usr=97.19%, sys=1.87%, ctx=132, majf=0, minf=36 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467382: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10010msec) 00:42:14.616 slat (nsec): min=7545, max=83158, avg=15732.17, stdev=10484.42 00:42:14.616 clat (usec): min=16095, max=65628, avg=33667.57, stdev=2137.35 00:42:14.616 lat (usec): min=16105, max=65665, avg=33683.30, stdev=2137.60 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:42:14.616 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:42:14.616 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[35390], 99.90th=[65274], 99.95th=[65799], 00:42:14.616 | 99.99th=[65799] 00:42:14.616 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1886.32, stdev=70.53, samples=19 00:42:14.616 iops : min= 416, max= 480, avg=471.58, stdev=17.63, samples=19 00:42:14.616 lat (msec) : 20=0.38%, 50=99.24%, 100=0.38% 00:42:14.616 cpu : usr=97.56%, sys=1.69%, ctx=80, majf=0, minf=30 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467383: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.616 slat (usec): min=8, max=117, avg=40.13, stdev=15.90 00:42:14.616 clat (usec): min=13265, max=36014, avg=33248.66, stdev=1652.13 00:42:14.616 lat (usec): min=13318, max=36056, avg=33288.78, stdev=1652.62 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[21890], 5.00th=[32637], 10.00th=[32637], 
20.00th=[32900], 00:42:14.616 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.616 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:42:14.616 | 99.99th=[35914] 00:42:14.616 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.616 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.616 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.616 cpu : usr=98.07%, sys=1.52%, ctx=18, majf=0, minf=44 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467384: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=474, BW=1898KiB/s (1944kB/s)(18.6MiB/10014msec) 00:42:14.616 slat (nsec): min=8633, max=96410, avg=38985.83, stdev=16671.83 00:42:14.616 clat (usec): min=14552, max=57012, avg=33354.11, stdev=1806.61 00:42:14.616 lat (usec): min=14573, max=57046, avg=33393.10, stdev=1806.07 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[31851], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.616 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.616 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[35914], 99.90th=[52167], 99.95th=[52167], 00:42:14.616 | 99.99th=[56886] 00:42:14.616 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1893.05, stdev=53.61, samples=19 00:42:14.616 iops : min= 448, max= 480, avg=473.26, stdev=13.40, samples=19 00:42:14.616 lat (msec) : 20=0.34%, 50=99.33%, 
100=0.34% 00:42:14.616 cpu : usr=96.85%, sys=1.96%, ctx=214, majf=0, minf=31 00:42:14.616 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: pid=467385: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=475, BW=1903KiB/s (1948kB/s)(18.6MiB/10024msec) 00:42:14.616 slat (usec): min=8, max=137, avg=54.25, stdev=24.84 00:42:14.616 clat (usec): min=12874, max=44044, avg=33140.47, stdev=1673.10 00:42:14.616 lat (usec): min=12894, max=44093, avg=33194.72, stdev=1674.66 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[23462], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:42:14.616 | 30.00th=[32900], 40.00th=[33162], 50.00th=[33162], 60.00th=[33424], 00:42:14.616 | 70.00th=[33424], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[43254], 00:42:14.616 | 99.99th=[44303] 00:42:14.616 bw ( KiB/s): min= 1792, max= 2048, per=4.17%, avg=1900.80, stdev=62.64, samples=20 00:42:14.616 iops : min= 448, max= 512, avg=475.20, stdev=15.66, samples=20 00:42:14.616 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.616 cpu : usr=98.18%, sys=1.39%, ctx=14, majf=0, minf=43 00:42:14.616 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 filename2: (groupid=0, jobs=1): err= 0: 
pid=467386: Tue Nov 19 16:48:03 2024 00:42:14.616 read: IOPS=476, BW=1905KiB/s (1950kB/s)(18.6MiB/10014msec) 00:42:14.616 slat (usec): min=9, max=111, avg=43.41, stdev=17.46 00:42:14.616 clat (usec): min=12821, max=43821, avg=33259.29, stdev=1696.60 00:42:14.616 lat (usec): min=12839, max=43868, avg=33302.71, stdev=1696.38 00:42:14.616 clat percentiles (usec): 00:42:14.616 | 1.00th=[23200], 5.00th=[32637], 10.00th=[32900], 20.00th=[32900], 00:42:14.616 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:42:14.616 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:42:14.616 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[42730], 00:42:14.616 | 99.99th=[43779] 00:42:14.616 bw ( KiB/s): min= 1792, max= 2032, per=4.17%, avg=1900.80, stdev=59.32, samples=20 00:42:14.616 iops : min= 448, max= 508, avg=475.20, stdev=14.83, samples=20 00:42:14.616 lat (msec) : 20=0.67%, 50=99.33% 00:42:14.616 cpu : usr=97.23%, sys=1.93%, ctx=114, majf=0, minf=68 00:42:14.616 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:42:14.616 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:14.616 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:14.616 latency : target=0, window=0, percentile=100.00%, depth=16 00:42:14.616 00:42:14.616 Run status group 0 (all jobs): 00:42:14.616 READ: bw=44.5MiB/s (46.6MB/s), 1892KiB/s-1905KiB/s (1938kB/s-1950kB/s), io=446MiB (467MB), run=10002-10024msec 00:42:14.616 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:42:14.616 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:14.617 16:48:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:14.617 16:48:03 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 bdev_null0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 [2024-11-19 16:48:03.518537] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 bdev_null1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:14.617 { 00:42:14.617 "params": { 00:42:14.617 "name": "Nvme$subsystem", 00:42:14.617 
"trtype": "$TEST_TRANSPORT", 00:42:14.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:14.617 "adrfam": "ipv4", 00:42:14.617 "trsvcid": "$NVMF_PORT", 00:42:14.617 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:14.617 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:14.617 "hdgst": ${hdgst:-false}, 00:42:14.617 "ddgst": ${ddgst:-false} 00:42:14.617 }, 00:42:14.617 "method": "bdev_nvme_attach_controller" 00:42:14.617 } 00:42:14.617 EOF 00:42:14.617 )") 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:14.617 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:14.617 { 
00:42:14.617 "params": { 00:42:14.617 "name": "Nvme$subsystem", 00:42:14.617 "trtype": "$TEST_TRANSPORT", 00:42:14.617 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:14.617 "adrfam": "ipv4", 00:42:14.617 "trsvcid": "$NVMF_PORT", 00:42:14.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:14.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:14.618 "hdgst": ${hdgst:-false}, 00:42:14.618 "ddgst": ${ddgst:-false} 00:42:14.618 }, 00:42:14.618 "method": "bdev_nvme_attach_controller" 00:42:14.618 } 00:42:14.618 EOF 00:42:14.618 )") 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:14.618 "params": { 00:42:14.618 "name": "Nvme0", 00:42:14.618 "trtype": "tcp", 00:42:14.618 "traddr": "10.0.0.2", 00:42:14.618 "adrfam": "ipv4", 00:42:14.618 "trsvcid": "4420", 00:42:14.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:14.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:14.618 "hdgst": false, 00:42:14.618 "ddgst": false 00:42:14.618 }, 00:42:14.618 "method": "bdev_nvme_attach_controller" 00:42:14.618 },{ 00:42:14.618 "params": { 00:42:14.618 "name": "Nvme1", 00:42:14.618 "trtype": "tcp", 00:42:14.618 "traddr": "10.0.0.2", 00:42:14.618 "adrfam": "ipv4", 00:42:14.618 "trsvcid": "4420", 00:42:14.618 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:14.618 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:14.618 "hdgst": false, 00:42:14.618 "ddgst": false 00:42:14.618 }, 00:42:14.618 "method": "bdev_nvme_attach_controller" 00:42:14.618 }' 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1349 -- # asan_lib= 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:14.618 16:48:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:14.618 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:14.618 ... 00:42:14.618 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:42:14.618 ... 
00:42:14.618 fio-3.35 00:42:14.618 Starting 4 threads 00:42:19.911 00:42:19.911 filename0: (groupid=0, jobs=1): err= 0: pid=468838: Tue Nov 19 16:48:09 2024 00:42:19.911 read: IOPS=1954, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:42:19.911 slat (nsec): min=4160, max=76224, avg=13295.58, stdev=5900.56 00:42:19.911 clat (usec): min=911, max=7588, avg=4052.66, stdev=301.58 00:42:19.911 lat (usec): min=924, max=7602, avg=4065.95, stdev=301.69 00:42:19.911 clat percentiles (usec): 00:42:19.911 | 1.00th=[ 3359], 5.00th=[ 3720], 10.00th=[ 3884], 20.00th=[ 3982], 00:42:19.911 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4015], 60.00th=[ 4047], 00:42:19.911 | 70.00th=[ 4080], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4359], 00:42:19.911 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 7242], 00:42:19.911 | 99.99th=[ 7570] 00:42:19.911 bw ( KiB/s): min=15120, max=15872, per=25.03%, avg=15630.30, stdev=212.00, samples=10 00:42:19.911 iops : min= 1890, max= 1984, avg=1953.70, stdev=26.45, samples=10 00:42:19.911 lat (usec) : 1000=0.01% 00:42:19.911 lat (msec) : 2=0.08%, 4=33.55%, 10=66.35% 00:42:19.911 cpu : usr=94.78%, sys=4.74%, ctx=13, majf=0, minf=0 00:42:19.911 IO depths : 1=0.2%, 2=6.8%, 4=65.9%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 issued rwts: total=9775,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:19.911 filename0: (groupid=0, jobs=1): err= 0: pid=468839: Tue Nov 19 16:48:09 2024 00:42:19.911 read: IOPS=1944, BW=15.2MiB/s (15.9MB/s)(76.0MiB/5001msec) 00:42:19.911 slat (nsec): min=4049, max=62685, avg=16942.99, stdev=5990.99 00:42:19.911 clat (usec): min=783, max=9144, avg=4049.58, stdev=491.22 00:42:19.911 lat (usec): min=796, max=9156, avg=4066.53, stdev=491.40 00:42:19.911 clat percentiles (usec): 
00:42:19.911 | 1.00th=[ 2057], 5.00th=[ 3720], 10.00th=[ 3916], 20.00th=[ 3949], 00:42:19.911 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:42:19.911 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4490], 00:42:19.911 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7308], 99.95th=[ 7373], 00:42:19.911 | 99.99th=[ 9110] 00:42:19.911 bw ( KiB/s): min=14896, max=15856, per=24.97%, avg=15591.11, stdev=283.53, samples=9 00:42:19.911 iops : min= 1862, max= 1982, avg=1948.67, stdev=35.48, samples=9 00:42:19.911 lat (usec) : 1000=0.11% 00:42:19.911 lat (msec) : 2=0.81%, 4=45.67%, 10=53.40% 00:42:19.911 cpu : usr=91.88%, sys=6.44%, ctx=122, majf=0, minf=9 00:42:19.911 IO depths : 1=0.4%, 2=22.4%, 4=51.7%, 8=25.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 issued rwts: total=9722,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:19.911 filename1: (groupid=0, jobs=1): err= 0: pid=468841: Tue Nov 19 16:48:09 2024 00:42:19.911 read: IOPS=1955, BW=15.3MiB/s (16.0MB/s)(76.4MiB/5002msec) 00:42:19.911 slat (nsec): min=3808, max=65228, avg=16285.22, stdev=6313.17 00:42:19.911 clat (usec): min=701, max=7530, avg=4022.61, stdev=469.15 00:42:19.911 lat (usec): min=714, max=7538, avg=4038.90, stdev=469.41 00:42:19.911 clat percentiles (usec): 00:42:19.911 | 1.00th=[ 1991], 5.00th=[ 3687], 10.00th=[ 3884], 20.00th=[ 3949], 00:42:19.911 | 30.00th=[ 3949], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4015], 00:42:19.911 | 70.00th=[ 4047], 80.00th=[ 4080], 90.00th=[ 4228], 95.00th=[ 4293], 00:42:19.911 | 99.00th=[ 6390], 99.50th=[ 6915], 99.90th=[ 7242], 99.95th=[ 7308], 00:42:19.911 | 99.99th=[ 7504] 00:42:19.911 bw ( KiB/s): min=15552, max=15744, per=25.14%, avg=15697.78, stdev=68.09, samples=9 00:42:19.911 iops : min= 
1944, max= 1968, avg=1962.22, stdev= 8.51, samples=9 00:42:19.911 lat (usec) : 750=0.01%, 1000=0.11% 00:42:19.911 lat (msec) : 2=0.88%, 4=48.20%, 10=50.80% 00:42:19.911 cpu : usr=95.14%, sys=4.38%, ctx=7, majf=0, minf=0 00:42:19.911 IO depths : 1=1.1%, 2=23.6%, 4=51.0%, 8=24.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 complete : 0=0.0%, 4=90.3%, 8=9.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 issued rwts: total=9782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:19.911 filename1: (groupid=0, jobs=1): err= 0: pid=468842: Tue Nov 19 16:48:09 2024 00:42:19.911 read: IOPS=1952, BW=15.3MiB/s (16.0MB/s)(76.3MiB/5003msec) 00:42:19.911 slat (nsec): min=3748, max=65143, avg=15330.02, stdev=5628.43 00:42:19.911 clat (usec): min=719, max=7174, avg=4038.64, stdev=350.03 00:42:19.911 lat (usec): min=733, max=7188, avg=4053.97, stdev=350.09 00:42:19.911 clat percentiles (usec): 00:42:19.911 | 1.00th=[ 3097], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 3949], 00:42:19.911 | 30.00th=[ 3982], 40.00th=[ 3982], 50.00th=[ 4015], 60.00th=[ 4047], 00:42:19.911 | 70.00th=[ 4047], 80.00th=[ 4113], 90.00th=[ 4228], 95.00th=[ 4359], 00:42:19.911 | 99.00th=[ 5473], 99.50th=[ 6259], 99.90th=[ 6849], 99.95th=[ 7111], 00:42:19.911 | 99.99th=[ 7177] 00:42:19.911 bw ( KiB/s): min=15104, max=15872, per=25.01%, avg=15619.20, stdev=281.69, samples=10 00:42:19.911 iops : min= 1888, max= 1984, avg=1952.40, stdev=35.21, samples=10 00:42:19.911 lat (usec) : 750=0.01%, 1000=0.02% 00:42:19.911 lat (msec) : 2=0.27%, 4=42.47%, 10=57.24% 00:42:19.911 cpu : usr=94.76%, sys=4.74%, ctx=8, majf=0, minf=0 00:42:19.911 IO depths : 1=0.4%, 2=22.8%, 4=51.5%, 8=25.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:19.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:19.911 complete : 0=0.0%, 4=90.6%, 8=9.4%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:42:19.911 issued rwts: total=9770,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:19.911 latency : target=0, window=0, percentile=100.00%, depth=8 00:42:19.911 00:42:19.911 Run status group 0 (all jobs): 00:42:19.911 READ: bw=61.0MiB/s (63.9MB/s), 15.2MiB/s-15.3MiB/s (15.9MB/s-16.0MB/s), io=305MiB (320MB), run=5001-5003msec 00:42:19.911 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:42:19.911 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 00:42:19.912 real 0m24.152s 00:42:19.912 user 4m32.207s 00:42:19.912 sys 0m6.779s 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 ************************************ 00:42:19.912 END TEST fio_dif_rand_params 00:42:19.912 ************************************ 00:42:19.912 16:48:09 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:42:19.912 16:48:09 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:19.912 16:48:09 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 ************************************ 00:42:19.912 START TEST fio_dif_digest 00:42:19.912 ************************************ 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:42:19.912 
16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 bdev_null0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:19.912 [2024-11-19 16:48:09.916823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:19.912 { 00:42:19.912 "params": { 00:42:19.912 "name": "Nvme$subsystem", 00:42:19.912 "trtype": "$TEST_TRANSPORT", 00:42:19.912 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:19.912 "adrfam": "ipv4", 00:42:19.912 "trsvcid": "$NVMF_PORT", 00:42:19.912 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:19.912 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:42:19.912 "hdgst": ${hdgst:-false}, 00:42:19.912 "ddgst": ${ddgst:-false} 00:42:19.912 }, 00:42:19.912 "method": "bdev_nvme_attach_controller" 00:42:19.912 } 00:42:19.912 EOF 00:42:19.912 )") 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 
00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:19.912 "params": { 00:42:19.912 "name": "Nvme0", 00:42:19.912 "trtype": "tcp", 00:42:19.912 "traddr": "10.0.0.2", 00:42:19.912 "adrfam": "ipv4", 00:42:19.912 "trsvcid": "4420", 00:42:19.912 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:19.912 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:19.912 "hdgst": true, 00:42:19.912 "ddgst": true 00:42:19.912 }, 00:42:19.912 "method": "bdev_nvme_attach_controller" 00:42:19.912 }' 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 
00:42:19.912 16:48:09 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:19.912 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:19.912 ... 00:42:19.912 fio-3.35 00:42:19.913 Starting 3 threads 00:42:32.226 00:42:32.226 filename0: (groupid=0, jobs=1): err= 0: pid=470186: Tue Nov 19 16:48:20 2024 00:42:32.226 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(244MiB/10047msec) 00:42:32.226 slat (nsec): min=3991, max=42733, avg=14822.01, stdev=3850.50 00:42:32.226 clat (usec): min=9389, max=53010, avg=15425.75, stdev=1517.97 00:42:32.226 lat (usec): min=9402, max=53024, avg=15440.58, stdev=1518.10 00:42:32.226 clat percentiles (usec): 00:42:32.226 | 1.00th=[12911], 5.00th=[13829], 10.00th=[14222], 20.00th=[14615], 00:42:32.226 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:42:32.226 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16581], 95.00th=[17171], 00:42:32.226 | 99.00th=[17957], 99.50th=[18482], 99.90th=[47449], 99.95th=[53216], 00:42:32.226 | 99.99th=[53216] 00:42:32.226 bw ( KiB/s): min=24064, max=26112, per=32.22%, avg=24911.25, stdev=461.46, samples=20 00:42:32.226 iops : min= 188, max= 204, avg=194.60, stdev= 3.62, samples=20 00:42:32.226 lat (msec) : 10=0.05%, 20=99.79%, 50=0.10%, 100=0.05% 00:42:32.226 cpu : usr=93.72%, sys=5.76%, ctx=23, majf=0, minf=147 00:42:32.226 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 issued rwts: total=1949,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.226 filename0: (groupid=0, jobs=1): err= 0: pid=470187: Tue Nov 19 16:48:20 2024 00:42:32.226 read: IOPS=197, 
BW=24.7MiB/s (25.9MB/s)(248MiB/10047msec) 00:42:32.226 slat (nsec): min=4315, max=40272, avg=15951.46, stdev=4886.88 00:42:32.226 clat (usec): min=7981, max=54764, avg=15128.50, stdev=1620.81 00:42:32.226 lat (usec): min=7994, max=54777, avg=15144.45, stdev=1620.87 00:42:32.226 clat percentiles (usec): 00:42:32.226 | 1.00th=[12518], 5.00th=[13435], 10.00th=[13829], 20.00th=[14222], 00:42:32.226 | 30.00th=[14615], 40.00th=[14877], 50.00th=[15139], 60.00th=[15401], 00:42:32.226 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:42:32.226 | 99.00th=[17957], 99.50th=[17957], 99.90th=[47973], 99.95th=[54789], 00:42:32.226 | 99.99th=[54789] 00:42:32.226 bw ( KiB/s): min=24576, max=27136, per=32.85%, avg=25397.65, stdev=600.06, samples=20 00:42:32.226 iops : min= 192, max= 212, avg=198.40, stdev= 4.71, samples=20 00:42:32.226 lat (msec) : 10=0.55%, 20=99.35%, 50=0.05%, 100=0.05% 00:42:32.226 cpu : usr=92.90%, sys=6.25%, ctx=108, majf=0, minf=100 00:42:32.226 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 issued rwts: total=1987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.226 filename0: (groupid=0, jobs=1): err= 0: pid=470188: Tue Nov 19 16:48:20 2024 00:42:32.226 read: IOPS=212, BW=26.5MiB/s (27.8MB/s)(267MiB/10049msec) 00:42:32.226 slat (nsec): min=4798, max=83238, avg=14918.79, stdev=5048.61 00:42:32.226 clat (usec): min=11218, max=54732, avg=14093.74, stdev=2135.12 00:42:32.226 lat (usec): min=11230, max=54745, avg=14108.66, stdev=2134.97 00:42:32.226 clat percentiles (usec): 00:42:32.226 | 1.00th=[11863], 5.00th=[12387], 10.00th=[12780], 20.00th=[13173], 00:42:32.226 | 30.00th=[13435], 40.00th=[13829], 50.00th=[14091], 60.00th=[14222], 00:42:32.226 | 70.00th=[14484], 
80.00th=[14746], 90.00th=[15139], 95.00th=[15533], 00:42:32.226 | 99.00th=[16450], 99.50th=[16909], 99.90th=[53740], 99.95th=[54264], 00:42:32.226 | 99.99th=[54789] 00:42:32.226 bw ( KiB/s): min=25088, max=28416, per=35.28%, avg=27274.10, stdev=727.99, samples=20 00:42:32.226 iops : min= 196, max= 222, avg=213.05, stdev= 5.71, samples=20 00:42:32.226 lat (msec) : 20=99.77%, 100=0.23% 00:42:32.226 cpu : usr=92.46%, sys=7.04%, ctx=17, majf=0, minf=173 00:42:32.226 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:32.226 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:32.226 issued rwts: total=2133,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:32.226 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:32.226 00:42:32.226 Run status group 0 (all jobs): 00:42:32.226 READ: bw=75.5MiB/s (79.2MB/s), 24.2MiB/s-26.5MiB/s (25.4MB/s-27.8MB/s), io=759MiB (795MB), run=10047-10049msec 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 
00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:32.226 00:42:32.226 real 0m11.233s 00:42:32.226 user 0m29.099s 00:42:32.226 sys 0m2.192s 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:32.226 16:48:21 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:42:32.226 ************************************ 00:42:32.226 END TEST fio_dif_digest 00:42:32.227 ************************************ 00:42:32.227 16:48:21 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:42:32.227 16:48:21 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:32.227 rmmod nvme_tcp 00:42:32.227 rmmod nvme_fabrics 00:42:32.227 rmmod nvme_keyring 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 463476 ']' 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 463476 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 463476 ']' 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 463476 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@959 -- # 
uname 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 463476 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 463476' 00:42:32.227 killing process with pid 463476 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@973 -- # kill 463476 00:42:32.227 16:48:21 nvmf_dif -- common/autotest_common.sh@978 -- # wait 463476 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:42:32.227 16:48:21 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:32.227 Waiting for block devices as requested 00:42:32.227 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:32.488 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:32.488 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:32.488 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:32.746 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:32.746 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:32.746 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:32.746 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:33.004 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:33.004 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:33.004 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:33.004 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:33.261 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:33.261 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:33.261 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:33.519 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:33.519 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:33.519 16:48:23 nvmf_dif -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:33.519 16:48:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.519 16:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:33.519 16:48:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.063 16:48:25 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:36.063 00:42:36.063 real 1m6.654s 00:42:36.063 user 6m28.141s 00:42:36.063 sys 0m18.473s 00:42:36.063 16:48:25 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:36.063 16:48:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:36.063 ************************************ 00:42:36.063 END TEST nvmf_dif 00:42:36.063 ************************************ 00:42:36.064 16:48:25 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:36.064 16:48:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:36.064 16:48:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:36.064 16:48:25 -- common/autotest_common.sh@10 -- # set +x 00:42:36.064 ************************************ 00:42:36.064 START TEST nvmf_abort_qd_sizes 00:42:36.064 ************************************ 00:42:36.064 16:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:42:36.064 * Looking for test storage... 00:42:36.064 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:36.064 16:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:42:36.064 16:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:42:36.064 16:48:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:42:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.064 --rc genhtml_branch_coverage=1 00:42:36.064 --rc genhtml_function_coverage=1 00:42:36.064 --rc genhtml_legend=1 00:42:36.064 --rc geninfo_all_blocks=1 00:42:36.064 --rc geninfo_unexecuted_blocks=1 00:42:36.064 00:42:36.064 ' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:42:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.064 --rc genhtml_branch_coverage=1 00:42:36.064 --rc genhtml_function_coverage=1 00:42:36.064 --rc genhtml_legend=1 00:42:36.064 --rc 
geninfo_all_blocks=1 00:42:36.064 --rc geninfo_unexecuted_blocks=1 00:42:36.064 00:42:36.064 ' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:42:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.064 --rc genhtml_branch_coverage=1 00:42:36.064 --rc genhtml_function_coverage=1 00:42:36.064 --rc genhtml_legend=1 00:42:36.064 --rc geninfo_all_blocks=1 00:42:36.064 --rc geninfo_unexecuted_blocks=1 00:42:36.064 00:42:36.064 ' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:42:36.064 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:36.064 --rc genhtml_branch_coverage=1 00:42:36.064 --rc genhtml_function_coverage=1 00:42:36.064 --rc genhtml_legend=1 00:42:36.064 --rc geninfo_all_blocks=1 00:42:36.064 --rc geninfo_unexecuted_blocks=1 00:42:36.064 00:42:36.064 ' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:36.064 16:48:26 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:36.064 16:48:26 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:36.064 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:36.064 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:36.065 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:36.065 16:48:26 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:42:36.065 16:48:26 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:37.973 16:48:28 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:42:37.973 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:42:37.973 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:37.973 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:42:37.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:42:37.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:37.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:42:37.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.301 ms 00:42:37.974 00:42:37.974 --- 10.0.0.2 ping statistics --- 00:42:37.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:37.974 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:37.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:42:37.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.076 ms 00:42:37.974 00:42:37.974 --- 10.0.0.1 ping statistics --- 00:42:37.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:37.974 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:37.974 16:48:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:39.354 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:39.354 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:42:39.354 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:42:40.288 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:42:40.288 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:40.289 16:48:30 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=474981 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 474981 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 474981 ']' 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:40.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:40.289 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:40.547 [2024-11-19 16:48:30.630298] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:42:40.547 [2024-11-19 16:48:30.630399] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:40.547 [2024-11-19 16:48:30.703246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:40.547 [2024-11-19 16:48:30.750327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:40.547 [2024-11-19 16:48:30.750397] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:40.547 [2024-11-19 16:48:30.750420] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:40.547 [2024-11-19 16:48:30.750430] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:40.547 [2024-11-19 16:48:30.750440] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:40.547 [2024-11-19 16:48:30.751871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:42:40.547 [2024-11-19 16:48:30.751937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:42:40.547 [2024-11-19 16:48:30.752003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:42:40.547 [2024-11-19 16:48:30.752006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:42:40.547 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:40.547 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:42:40.547 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:40.547 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:40.547 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:88:00.0 ]] 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:88:00.0 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:40.805 16:48:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:40.805 ************************************ 00:42:40.805 START TEST spdk_target_abort 00:42:40.805 ************************************ 00:42:40.805 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:42:40.805 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:42:40.805 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:42:40.805 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:40.805 16:48:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.110 spdk_targetn1 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.110 [2024-11-19 16:48:33.758392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:44.110 [2024-11-19 16:48:33.798738] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:44.110 16:48:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:47.390 Initializing NVMe Controllers 00:42:47.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:47.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:47.391 Initialization complete. Launching workers. 
00:42:47.391 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11733, failed: 0 00:42:47.391 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 10495 00:42:47.391 success 717, unsuccessful 521, failed 0 00:42:47.391 16:48:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:47.391 16:48:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:50.671 Initializing NVMe Controllers 00:42:50.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:50.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:50.671 Initialization complete. Launching workers. 00:42:50.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8699, failed: 0 00:42:50.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1233, failed to submit 7466 00:42:50.671 success 349, unsuccessful 884, failed 0 00:42:50.671 16:48:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:50.671 16:48:40 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:53.201 Initializing NVMe Controllers 00:42:53.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:42:53.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:42:53.201 Initialization complete. Launching workers. 
00:42:53.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30940, failed: 0 00:42:53.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2617, failed to submit 28323 00:42:53.201 success 518, unsuccessful 2099, failed 0 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:53.201 16:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 474981 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 474981 ']' 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 474981 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 474981 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 474981' 00:42:54.573 killing process with pid 474981 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 474981 00:42:54.573 16:48:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 474981 00:42:54.832 00:42:54.832 real 0m14.144s 00:42:54.832 user 0m53.871s 00:42:54.832 sys 0m2.385s 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:42:54.832 ************************************ 00:42:54.832 END TEST spdk_target_abort 00:42:54.832 ************************************ 00:42:54.832 16:48:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:42:54.832 16:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:54.832 16:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:54.832 16:48:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:42:54.832 ************************************ 00:42:54.832 START TEST kernel_target_abort 00:42:54.832 ************************************ 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:42:54.832 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:42:54.833 16:48:45 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:42:54.833 16:48:45 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:42:56.208 Waiting for block devices as requested 00:42:56.208 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:42:56.208 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:56.469 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:56.469 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:56.469 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:56.469 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:56.730 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:56.730 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:56.730 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:56.730 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:42:56.990 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:42:56.990 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:42:56.990 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:42:57.249 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:42:57.249 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:42:57.249 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:42:57.249 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:42:57.507 No valid GPT data, bailing 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:42:57.507 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:42:57.508 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:42:57.766 00:42:57.766 Discovery Log Number of Records 2, Generation counter 2 00:42:57.766 =====Discovery Log Entry 0====== 00:42:57.766 trtype: tcp 00:42:57.766 adrfam: ipv4 00:42:57.766 subtype: current discovery subsystem 00:42:57.766 treq: not specified, sq flow control disable supported 00:42:57.766 portid: 1 00:42:57.766 trsvcid: 4420 00:42:57.766 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:42:57.766 traddr: 10.0.0.1 00:42:57.766 eflags: none 00:42:57.766 sectype: none 00:42:57.766 =====Discovery Log Entry 1====== 00:42:57.766 trtype: tcp 00:42:57.766 adrfam: ipv4 00:42:57.766 subtype: nvme subsystem 00:42:57.766 treq: not specified, sq flow control disable supported 00:42:57.766 portid: 1 00:42:57.766 trsvcid: 4420 00:42:57.766 subnqn: nqn.2016-06.io.spdk:testnqn 00:42:57.766 traddr: 10.0.0.1 00:42:57.766 eflags: none 00:42:57.766 sectype: none 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:42:57.766 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:42:57.767 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:42:57.767 16:48:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:01.044 Initializing NVMe Controllers 00:43:01.044 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:01.044 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:01.044 Initialization complete. Launching workers. 
00:43:01.044 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55424, failed: 0 00:43:01.044 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 55424, failed to submit 0 00:43:01.044 success 0, unsuccessful 55424, failed 0 00:43:01.044 16:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:01.044 16:48:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:04.328 Initializing NVMe Controllers 00:43:04.328 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:04.328 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:04.328 Initialization complete. Launching workers. 00:43:04.328 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 99101, failed: 0 00:43:04.328 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24982, failed to submit 74119 00:43:04.328 success 0, unsuccessful 24982, failed 0 00:43:04.328 16:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:04.328 16:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:07.611 Initializing NVMe Controllers 00:43:07.611 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:07.611 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:07.611 Initialization complete. Launching workers. 
00:43:07.611 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 98253, failed: 0 00:43:07.611 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24562, failed to submit 73691 00:43:07.611 success 0, unsuccessful 24562, failed 0 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:43:07.611 16:48:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:08.177 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:08.177 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:08.177 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:43:08.177 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:43:09.114 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:43:09.374 00:43:09.374 real 0m14.397s 00:43:09.374 user 0m6.620s 00:43:09.374 sys 0m3.273s 00:43:09.374 16:48:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:09.374 16:48:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:09.374 ************************************ 00:43:09.374 END TEST kernel_target_abort 00:43:09.374 ************************************ 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:09.374 rmmod nvme_tcp 00:43:09.374 rmmod nvme_fabrics 00:43:09.374 rmmod nvme_keyring 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 474981 ']' 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 474981 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 474981 ']' 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 474981 00:43:09.374 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (474981) - No such process 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 474981 is not found' 00:43:09.374 Process with pid 474981 is not found 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:09.374 16:48:59 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:10.310 Waiting for block devices as requested 00:43:10.569 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:43:10.569 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:10.829 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:10.829 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:10.829 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:10.829 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:11.090 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:43:11.090 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:11.090 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:11.348 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:43:11.348 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:43:11.348 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:43:11.348 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:43:11.608 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:43:11.608 0000:80:04.2 
(8086 0e22): vfio-pci -> ioatdma 00:43:11.608 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:43:11.608 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:11.867 16:49:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:13.770 16:49:04 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:14.029 00:43:14.029 real 0m38.207s 00:43:14.029 user 1m2.729s 00:43:14.029 sys 0m9.200s 00:43:14.029 16:49:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:14.029 16:49:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:14.029 ************************************ 00:43:14.029 END TEST nvmf_abort_qd_sizes 00:43:14.029 ************************************ 00:43:14.029 16:49:04 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:14.029 16:49:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:14.029 16:49:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:43:14.029 16:49:04 -- common/autotest_common.sh@10 -- # set +x 00:43:14.029 ************************************ 00:43:14.029 START TEST keyring_file 00:43:14.029 ************************************ 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:43:14.029 * Looking for test storage... 00:43:14.029 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@345 -- # : 1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:14.029 16:49:04 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@353 -- # local d=1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@355 -- # echo 1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@353 -- # local d=2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@355 -- # echo 2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:14.029 16:49:04 keyring_file -- scripts/common.sh@368 -- # return 0 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:14.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.029 --rc genhtml_branch_coverage=1 00:43:14.029 --rc genhtml_function_coverage=1 00:43:14.029 --rc genhtml_legend=1 00:43:14.029 --rc geninfo_all_blocks=1 00:43:14.029 --rc geninfo_unexecuted_blocks=1 00:43:14.029 00:43:14.029 ' 00:43:14.029 16:49:04 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:14.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.030 --rc genhtml_branch_coverage=1 00:43:14.030 --rc genhtml_function_coverage=1 00:43:14.030 --rc genhtml_legend=1 00:43:14.030 --rc geninfo_all_blocks=1 00:43:14.030 --rc 
geninfo_unexecuted_blocks=1 00:43:14.030 00:43:14.030 ' 00:43:14.030 16:49:04 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:14.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.030 --rc genhtml_branch_coverage=1 00:43:14.030 --rc genhtml_function_coverage=1 00:43:14.030 --rc genhtml_legend=1 00:43:14.030 --rc geninfo_all_blocks=1 00:43:14.030 --rc geninfo_unexecuted_blocks=1 00:43:14.030 00:43:14.030 ' 00:43:14.030 16:49:04 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:14.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:14.030 --rc genhtml_branch_coverage=1 00:43:14.030 --rc genhtml_function_coverage=1 00:43:14.030 --rc genhtml_legend=1 00:43:14.030 --rc geninfo_all_blocks=1 00:43:14.030 --rc geninfo_unexecuted_blocks=1 00:43:14.030 00:43:14.030 ' 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:14.030 16:49:04 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:14.030 16:49:04 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:43:14.030 16:49:04 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:14.030 16:49:04 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:14.030 16:49:04 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:14.030 16:49:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.030 16:49:04 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.030 16:49:04 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.030 16:49:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:43:14.030 16:49:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@51 -- # : 0 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:14.030 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:43:14.030 16:49:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.eWGh8eEM26 00:43:14.030 16:49:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:14.030 16:49:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.eWGh8eEM26 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.eWGh8eEM26 00:43:14.289 16:49:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.eWGh8eEM26 00:43:14.289 16:49:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.U6qykMjaUJ 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:14.289 16:49:04 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.U6qykMjaUJ 00:43:14.289 16:49:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.U6qykMjaUJ 00:43:14.289 16:49:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.U6qykMjaUJ 
00:43:14.289 16:49:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=480741 00:43:14.289 16:49:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:14.290 16:49:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 480741 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 480741 ']' 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:14.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:14.290 16:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:14.290 [2024-11-19 16:49:04.475687] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:43:14.290 [2024-11-19 16:49:04.475787] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480741 ] 00:43:14.290 [2024-11-19 16:49:04.542538] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:14.290 [2024-11-19 16:49:04.589004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:14.548 16:49:04 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:14.548 [2024-11-19 16:49:04.830709] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:14.548 null0 00:43:14.548 [2024-11-19 16:49:04.862766] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:14.548 [2024-11-19 16:49:04.863268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:14.548 16:49:04 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:14.548 16:49:04 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:14.806 [2024-11-19 16:49:04.890838] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:43:14.806 request: 00:43:14.806 { 00:43:14.806 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:43:14.806 "secure_channel": false, 00:43:14.806 "listen_address": { 00:43:14.806 "trtype": "tcp", 00:43:14.806 "traddr": "127.0.0.1", 00:43:14.806 "trsvcid": "4420" 00:43:14.806 }, 00:43:14.806 "method": "nvmf_subsystem_add_listener", 00:43:14.806 "req_id": 1 00:43:14.806 } 00:43:14.806 Got JSON-RPC error response 00:43:14.806 response: 00:43:14.806 { 00:43:14.806 "code": -32602, 00:43:14.806 "message": "Invalid parameters" 00:43:14.806 } 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:14.806 16:49:04 keyring_file -- keyring/file.sh@47 -- # bperfpid=480754 00:43:14.806 16:49:04 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:43:14.806 16:49:04 keyring_file -- keyring/file.sh@49 -- # waitforlisten 480754 /var/tmp/bperf.sock 00:43:14.806 16:49:04 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 480754 ']' 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:14.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:14.806 16:49:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:14.806 [2024-11-19 16:49:04.937537] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 00:43:14.806 [2024-11-19 16:49:04.937616] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid480754 ] 00:43:14.806 [2024-11-19 16:49:05.002049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:14.806 [2024-11-19 16:49:05.046572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:15.064 16:49:05 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:15.064 16:49:05 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:15.064 16:49:05 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:15.064 16:49:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:15.321 16:49:05 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.U6qykMjaUJ 00:43:15.321 16:49:05 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.U6qykMjaUJ 00:43:15.579 16:49:05 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:43:15.579 16:49:05 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:43:15.579 16:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:15.579 16:49:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:15.579 16:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:15.837 16:49:05 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.eWGh8eEM26 == \/\t\m\p\/\t\m\p\.\e\W\G\h\8\e\E\M\2\6 ]] 00:43:15.837 16:49:05 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:43:15.837 16:49:05 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:43:15.837 16:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:15.837 16:49:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:15.837 16:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:16.096 16:49:06 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.U6qykMjaUJ == \/\t\m\p\/\t\m\p\.\U\6\q\y\k\M\j\a\U\J ]] 00:43:16.096 16:49:06 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:43:16.096 16:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:16.096 16:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.096 16:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.096 16:49:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.096 16:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:43:16.354 16:49:06 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:43:16.354 16:49:06 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:43:16.354 16:49:06 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:16.354 16:49:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.354 16:49:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.354 16:49:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.354 16:49:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:16.611 16:49:06 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:43:16.611 16:49:06 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:16.611 16:49:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:16.869 [2024-11-19 16:49:07.068020] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:16.869 nvme0n1 00:43:16.869 16:49:07 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:43:16.869 16:49:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:16.869 16:49:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:16.869 16:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:16.869 16:49:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:16.869 16:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == 
"key0")' 00:43:17.127 16:49:07 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:43:17.127 16:49:07 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:43:17.127 16:49:07 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:17.127 16:49:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:17.127 16:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:17.127 16:49:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:17.127 16:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:17.385 16:49:07 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:43:17.385 16:49:07 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:17.641 Running I/O for 1 seconds... 00:43:18.573 10383.00 IOPS, 40.56 MiB/s 00:43:18.573 Latency(us) 00:43:18.573 [2024-11-19T15:49:08.912Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:18.573 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:43:18.573 nvme0n1 : 1.01 10438.08 40.77 0.00 0.00 12227.12 5461.33 23981.32 00:43:18.573 [2024-11-19T15:49:08.912Z] =================================================================================================================== 00:43:18.573 [2024-11-19T15:49:08.912Z] Total : 10438.08 40.77 0.00 0.00 12227.12 5461.33 23981.32 00:43:18.573 { 00:43:18.573 "results": [ 00:43:18.573 { 00:43:18.573 "job": "nvme0n1", 00:43:18.573 "core_mask": "0x2", 00:43:18.573 "workload": "randrw", 00:43:18.573 "percentage": 50, 00:43:18.573 "status": "finished", 00:43:18.573 "queue_depth": 128, 00:43:18.574 "io_size": 4096, 00:43:18.574 "runtime": 1.007178, 00:43:18.574 "iops": 10438.075494103326, 00:43:18.574 "mibps": 40.77373239884112, 
00:43:18.574 "io_failed": 0, 00:43:18.574 "io_timeout": 0, 00:43:18.574 "avg_latency_us": 12227.11591983118, 00:43:18.574 "min_latency_us": 5461.333333333333, 00:43:18.574 "max_latency_us": 23981.321481481482 00:43:18.574 } 00:43:18.574 ], 00:43:18.574 "core_count": 1 00:43:18.574 } 00:43:18.574 16:49:08 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:18.574 16:49:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:18.832 16:49:09 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:43:18.832 16:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:18.832 16:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:18.832 16:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:18.832 16:49:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:18.832 16:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:19.089 16:49:09 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:43:19.089 16:49:09 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:43:19.089 16:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:19.089 16:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:19.089 16:49:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:19.089 16:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:19.089 16:49:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.346 16:49:09 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:43:19.346 16:49:09 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:19.346 16:49:09 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:19.346 16:49:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:43:19.911 [2024-11-19 16:49:09.941401] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:19.911 [2024-11-19 16:49:09.942151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecbc90 (107): Transport endpoint is not connected 00:43:19.911 [2024-11-19 16:49:09.943143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ecbc90 (9): Bad file descriptor 00:43:19.911 [2024-11-19 16:49:09.944141] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:19.911 [2024-11-19 16:49:09.944167] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:19.911 [2024-11-19 16:49:09.944181] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:19.911 [2024-11-19 16:49:09.944196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:43:19.911 request: 00:43:19.911 { 00:43:19.911 "name": "nvme0", 00:43:19.911 "trtype": "tcp", 00:43:19.911 "traddr": "127.0.0.1", 00:43:19.911 "adrfam": "ipv4", 00:43:19.911 "trsvcid": "4420", 00:43:19.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:19.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:19.911 "prchk_reftag": false, 00:43:19.911 "prchk_guard": false, 00:43:19.911 "hdgst": false, 00:43:19.911 "ddgst": false, 00:43:19.911 "psk": "key1", 00:43:19.911 "allow_unrecognized_csi": false, 00:43:19.911 "method": "bdev_nvme_attach_controller", 00:43:19.911 "req_id": 1 00:43:19.911 } 00:43:19.911 Got JSON-RPC error response 00:43:19.911 response: 00:43:19.911 { 00:43:19.911 "code": -5, 00:43:19.911 "message": "Input/output error" 00:43:19.911 } 00:43:19.911 16:49:09 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:19.911 16:49:09 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:19.911 16:49:09 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:19.911 16:49:09 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:19.911 16:49:09 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:43:19.911 16:49:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:19.911 16:49:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:19.911 16:49:09 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:43:19.911 16:49:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:19.911 16:49:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:20.169 16:49:10 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:43:20.169 16:49:10 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:43:20.169 16:49:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:20.169 16:49:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:20.169 16:49:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:20.169 16:49:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.169 16:49:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:20.427 16:49:10 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:43:20.427 16:49:10 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:43:20.427 16:49:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:20.684 16:49:10 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:43:20.684 16:49:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:43:20.942 16:49:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:43:20.942 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:20.942 16:49:11 keyring_file -- keyring/file.sh@78 -- # jq length 00:43:21.199 16:49:11 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:43:21.199 16:49:11 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.eWGh8eEM26 00:43:21.199 16:49:11 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:21.199 16:49:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.200 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.458 [2024-11-19 16:49:11.621181] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eWGh8eEM26': 0100660 00:43:21.458 [2024-11-19 16:49:11.621215] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:43:21.458 request: 00:43:21.458 { 00:43:21.458 "name": "key0", 00:43:21.458 "path": "/tmp/tmp.eWGh8eEM26", 00:43:21.458 "method": "keyring_file_add_key", 00:43:21.458 "req_id": 1 00:43:21.458 } 00:43:21.458 Got JSON-RPC error response 00:43:21.458 response: 00:43:21.458 { 00:43:21.458 "code": -1, 00:43:21.458 "message": "Operation not permitted" 00:43:21.458 } 00:43:21.458 16:49:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:21.458 16:49:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:21.458 16:49:11 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:21.458 16:49:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:21.458 16:49:11 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.eWGh8eEM26 00:43:21.458 16:49:11 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.458 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.eWGh8eEM26 00:43:21.716 16:49:11 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.eWGh8eEM26 00:43:21.716 16:49:11 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:43:21.716 16:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:21.716 16:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:21.716 16:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:21.716 16:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:21.716 16:49:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:21.974 16:49:12 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:43:21.974 16:49:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:21.974 16:49:12 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:21.974 16:49:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:21.974 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:22.232 [2024-11-19 16:49:12.447480] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.eWGh8eEM26': No such file or directory 00:43:22.232 [2024-11-19 16:49:12.447519] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:43:22.232 [2024-11-19 16:49:12.447558] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:43:22.232 [2024-11-19 16:49:12.447571] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:43:22.232 [2024-11-19 16:49:12.447583] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:43:22.232 [2024-11-19 16:49:12.447595] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:43:22.232 request: 00:43:22.232 { 00:43:22.232 "name": "nvme0", 00:43:22.232 "trtype": "tcp", 00:43:22.232 "traddr": "127.0.0.1", 00:43:22.232 "adrfam": "ipv4", 00:43:22.232 "trsvcid": "4420", 00:43:22.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:22.232 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:43:22.232 "prchk_reftag": false, 00:43:22.232 "prchk_guard": false, 00:43:22.232 "hdgst": false, 00:43:22.232 "ddgst": false, 00:43:22.232 "psk": "key0", 00:43:22.232 "allow_unrecognized_csi": false, 00:43:22.232 "method": "bdev_nvme_attach_controller", 00:43:22.232 "req_id": 1 00:43:22.232 } 00:43:22.232 Got JSON-RPC error response 00:43:22.232 response: 00:43:22.232 { 00:43:22.232 "code": -19, 00:43:22.232 "message": "No such device" 00:43:22.232 } 00:43:22.232 16:49:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:43:22.232 16:49:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:22.232 16:49:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:22.232 16:49:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:22.232 16:49:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:43:22.232 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:22.491 16:49:12 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@17 -- # name=key0 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@17 -- # digest=0 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@18 -- # mktemp 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.2GLOe2Jw1F 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:22.491 16:49:12 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:22.491 16:49:12 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:43:22.491 16:49:12 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:22.491 16:49:12 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:43:22.491 16:49:12 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:43:22.491 16:49:12 keyring_file -- nvmf/common.sh@733 -- # python - 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.2GLOe2Jw1F 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.2GLOe2Jw1F 00:43:22.491 16:49:12 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.2GLOe2Jw1F 00:43:22.491 16:49:12 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2GLOe2Jw1F 00:43:22.491 16:49:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2GLOe2Jw1F 00:43:22.749 16:49:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:22.749 16:49:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:23.315 nvme0n1 00:43:23.315 16:49:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:43:23.315 16:49:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:23.315 16:49:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:23.315 16:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:23.315 16:49:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:23.315 
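The `format_interchange_psk`/`format_key` helpers traced above pipe the prefix, hex key, and digest into a `python -` heredoc to build the TLS PSK string written to the temp file. A minimal sketch of that step, assuming the NVMe TLS PSK interchange layout (prefix, two-digit digest field, then base64 of the raw key followed by its little-endian CRC32) — the function name and exact field handling here are illustrative, not copied from SPDK:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, digest: int = 0) -> str:
    """Sketch of a PSK interchange string builder:
    NVMeTLSkey-1:<dd>:<base64(key || CRC32(key) little-endian)>:
    (assumed layout; digest=0 means no hash applied to the key)"""
    key = bytes.fromhex(key_hex)
    # append a 4-byte little-endian CRC32 of the raw key material
    crc = struct.pack("<I", zlib.crc32(key))
    encoded = base64.b64encode(key + crc).decode("ascii")
    return f"NVMeTLSkey-1:{digest:02d}:{encoded}:"

# same 16-byte key the test feeds to prep_key
psk = format_interchange_psk("00112233445566778899aabbccddeeff", 0)
print(psk)
```

The result is what `chmod 0600` then protects at `/tmp/tmp.2GLOe2Jw1F` before `keyring_file_add_key` registers it.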
16:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:23.573 16:49:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:43:23.573 16:49:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:43:23.573 16:49:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:43:23.831 16:49:13 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:43:23.831 16:49:13 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:43:23.831 16:49:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:23.831 16:49:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:23.831 16:49:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:24.089 16:49:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:43:24.089 16:49:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:43:24.089 16:49:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:24.089 16:49:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:24.089 16:49:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:24.089 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:24.089 16:49:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:24.347 16:49:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:43:24.347 16:49:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:24.347 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
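The `get_refcnt`/`get_key` helpers above repeatedly run `keyring_get_keys` over the bperf socket and filter the JSON with `jq '.[] | select(.name == "key0")' | jq -r .refcnt`. The same selection in plain Python, with a made-up sample payload (the field set shown is an assumption based on the fields the log queries: `name`, `refcnt`, `removed`):

```python
import json

def get_refcnt(keys_json: str, name: str):
    """Mimic jq '.[] | select(.name == NAME)' followed by '.refcnt'
    over keyring_get_keys output: return the refcnt of the named key,
    or None if no entry matches."""
    for entry in json.loads(keys_json):
        if entry.get("name") == name:
            return entry.get("refcnt")
    return None

# hypothetical keyring_get_keys response for illustration only
sample = '[{"name": "key0", "path": "/tmp/key0", "refcnt": 2, "removed": false}]'
print(get_refcnt(sample, "key0"))  # 2
```

This mirrors the `(( 2 == 2 ))` refcount assertions the test makes while the controller holds a reference to key0.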
nvme0 00:43:24.605 16:49:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:43:24.605 16:49:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:43:24.605 16:49:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:24.863 16:49:15 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:43:24.863 16:49:15 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.2GLOe2Jw1F 00:43:24.863 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.2GLOe2Jw1F 00:43:25.121 16:49:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.U6qykMjaUJ 00:43:25.121 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.U6qykMjaUJ 00:43:25.379 16:49:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:25.379 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:43:25.637 nvme0n1 00:43:25.637 16:49:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:43:25.637 16:49:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:43:26.203 16:49:16 keyring_file -- keyring/file.sh@113 -- # config='{ 00:43:26.203 "subsystems": [ 00:43:26.203 { 00:43:26.203 "subsystem": "keyring", 00:43:26.203 
"config": [ 00:43:26.203 { 00:43:26.203 "method": "keyring_file_add_key", 00:43:26.203 "params": { 00:43:26.203 "name": "key0", 00:43:26.203 "path": "/tmp/tmp.2GLOe2Jw1F" 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "keyring_file_add_key", 00:43:26.203 "params": { 00:43:26.203 "name": "key1", 00:43:26.203 "path": "/tmp/tmp.U6qykMjaUJ" 00:43:26.203 } 00:43:26.203 } 00:43:26.203 ] 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "subsystem": "iobuf", 00:43:26.203 "config": [ 00:43:26.203 { 00:43:26.203 "method": "iobuf_set_options", 00:43:26.203 "params": { 00:43:26.203 "small_pool_count": 8192, 00:43:26.203 "large_pool_count": 1024, 00:43:26.203 "small_bufsize": 8192, 00:43:26.203 "large_bufsize": 135168, 00:43:26.203 "enable_numa": false 00:43:26.203 } 00:43:26.203 } 00:43:26.203 ] 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "subsystem": "sock", 00:43:26.203 "config": [ 00:43:26.203 { 00:43:26.203 "method": "sock_set_default_impl", 00:43:26.203 "params": { 00:43:26.203 "impl_name": "posix" 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "sock_impl_set_options", 00:43:26.203 "params": { 00:43:26.203 "impl_name": "ssl", 00:43:26.203 "recv_buf_size": 4096, 00:43:26.203 "send_buf_size": 4096, 00:43:26.203 "enable_recv_pipe": true, 00:43:26.203 "enable_quickack": false, 00:43:26.203 "enable_placement_id": 0, 00:43:26.203 "enable_zerocopy_send_server": true, 00:43:26.203 "enable_zerocopy_send_client": false, 00:43:26.203 "zerocopy_threshold": 0, 00:43:26.203 "tls_version": 0, 00:43:26.203 "enable_ktls": false 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "sock_impl_set_options", 00:43:26.203 "params": { 00:43:26.203 "impl_name": "posix", 00:43:26.203 "recv_buf_size": 2097152, 00:43:26.203 "send_buf_size": 2097152, 00:43:26.203 "enable_recv_pipe": true, 00:43:26.203 "enable_quickack": false, 00:43:26.203 "enable_placement_id": 0, 00:43:26.203 "enable_zerocopy_send_server": true, 00:43:26.203 
"enable_zerocopy_send_client": false, 00:43:26.203 "zerocopy_threshold": 0, 00:43:26.203 "tls_version": 0, 00:43:26.203 "enable_ktls": false 00:43:26.203 } 00:43:26.203 } 00:43:26.203 ] 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "subsystem": "vmd", 00:43:26.203 "config": [] 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "subsystem": "accel", 00:43:26.203 "config": [ 00:43:26.203 { 00:43:26.203 "method": "accel_set_options", 00:43:26.203 "params": { 00:43:26.203 "small_cache_size": 128, 00:43:26.203 "large_cache_size": 16, 00:43:26.203 "task_count": 2048, 00:43:26.203 "sequence_count": 2048, 00:43:26.203 "buf_count": 2048 00:43:26.203 } 00:43:26.203 } 00:43:26.203 ] 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "subsystem": "bdev", 00:43:26.203 "config": [ 00:43:26.203 { 00:43:26.203 "method": "bdev_set_options", 00:43:26.203 "params": { 00:43:26.203 "bdev_io_pool_size": 65535, 00:43:26.203 "bdev_io_cache_size": 256, 00:43:26.203 "bdev_auto_examine": true, 00:43:26.203 "iobuf_small_cache_size": 128, 00:43:26.203 "iobuf_large_cache_size": 16 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_raid_set_options", 00:43:26.203 "params": { 00:43:26.203 "process_window_size_kb": 1024, 00:43:26.203 "process_max_bandwidth_mb_sec": 0 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_iscsi_set_options", 00:43:26.203 "params": { 00:43:26.203 "timeout_sec": 30 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_nvme_set_options", 00:43:26.203 "params": { 00:43:26.203 "action_on_timeout": "none", 00:43:26.203 "timeout_us": 0, 00:43:26.203 "timeout_admin_us": 0, 00:43:26.203 "keep_alive_timeout_ms": 10000, 00:43:26.203 "arbitration_burst": 0, 00:43:26.203 "low_priority_weight": 0, 00:43:26.203 "medium_priority_weight": 0, 00:43:26.203 "high_priority_weight": 0, 00:43:26.203 "nvme_adminq_poll_period_us": 10000, 00:43:26.203 "nvme_ioq_poll_period_us": 0, 00:43:26.203 "io_queue_requests": 512, 00:43:26.203 
"delay_cmd_submit": true, 00:43:26.203 "transport_retry_count": 4, 00:43:26.203 "bdev_retry_count": 3, 00:43:26.203 "transport_ack_timeout": 0, 00:43:26.203 "ctrlr_loss_timeout_sec": 0, 00:43:26.203 "reconnect_delay_sec": 0, 00:43:26.203 "fast_io_fail_timeout_sec": 0, 00:43:26.203 "disable_auto_failback": false, 00:43:26.203 "generate_uuids": false, 00:43:26.203 "transport_tos": 0, 00:43:26.203 "nvme_error_stat": false, 00:43:26.203 "rdma_srq_size": 0, 00:43:26.203 "io_path_stat": false, 00:43:26.203 "allow_accel_sequence": false, 00:43:26.203 "rdma_max_cq_size": 0, 00:43:26.203 "rdma_cm_event_timeout_ms": 0, 00:43:26.203 "dhchap_digests": [ 00:43:26.203 "sha256", 00:43:26.203 "sha384", 00:43:26.203 "sha512" 00:43:26.203 ], 00:43:26.203 "dhchap_dhgroups": [ 00:43:26.203 "null", 00:43:26.203 "ffdhe2048", 00:43:26.203 "ffdhe3072", 00:43:26.203 "ffdhe4096", 00:43:26.203 "ffdhe6144", 00:43:26.203 "ffdhe8192" 00:43:26.203 ] 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_nvme_attach_controller", 00:43:26.203 "params": { 00:43:26.203 "name": "nvme0", 00:43:26.203 "trtype": "TCP", 00:43:26.203 "adrfam": "IPv4", 00:43:26.203 "traddr": "127.0.0.1", 00:43:26.203 "trsvcid": "4420", 00:43:26.203 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:26.203 "prchk_reftag": false, 00:43:26.203 "prchk_guard": false, 00:43:26.203 "ctrlr_loss_timeout_sec": 0, 00:43:26.203 "reconnect_delay_sec": 0, 00:43:26.203 "fast_io_fail_timeout_sec": 0, 00:43:26.203 "psk": "key0", 00:43:26.203 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:26.203 "hdgst": false, 00:43:26.203 "ddgst": false, 00:43:26.203 "multipath": "multipath" 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_nvme_set_hotplug", 00:43:26.203 "params": { 00:43:26.203 "period_us": 100000, 00:43:26.203 "enable": false 00:43:26.203 } 00:43:26.203 }, 00:43:26.203 { 00:43:26.203 "method": "bdev_wait_for_examine" 00:43:26.203 } 00:43:26.203 ] 00:43:26.203 }, 00:43:26.203 { 00:43:26.204 
"subsystem": "nbd", 00:43:26.204 "config": [] 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }' 00:43:26.204 16:49:16 keyring_file -- keyring/file.sh@115 -- # killprocess 480754 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 480754 ']' 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@958 -- # kill -0 480754 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480754 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480754' 00:43:26.204 killing process with pid 480754 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@973 -- # kill 480754 00:43:26.204 Received shutdown signal, test time was about 1.000000 seconds 00:43:26.204 00:43:26.204 Latency(us) 00:43:26.204 [2024-11-19T15:49:16.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:26.204 [2024-11-19T15:49:16.543Z] =================================================================================================================== 00:43:26.204 [2024-11-19T15:49:16.543Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@978 -- # wait 480754 00:43:26.204 16:49:16 keyring_file -- keyring/file.sh@118 -- # bperfpid=482218 00:43:26.204 16:49:16 keyring_file -- keyring/file.sh@120 -- # waitforlisten 482218 /var/tmp/bperf.sock 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 482218 ']' 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 
00:43:26.204 16:49:16 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:26.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:26.204 16:49:16 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:26.204 16:49:16 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:43:26.204 "subsystems": [ 00:43:26.204 { 00:43:26.204 "subsystem": "keyring", 00:43:26.204 "config": [ 00:43:26.204 { 00:43:26.204 "method": "keyring_file_add_key", 00:43:26.204 "params": { 00:43:26.204 "name": "key0", 00:43:26.204 "path": "/tmp/tmp.2GLOe2Jw1F" 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "keyring_file_add_key", 00:43:26.204 "params": { 00:43:26.204 "name": "key1", 00:43:26.204 "path": "/tmp/tmp.U6qykMjaUJ" 00:43:26.204 } 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "iobuf", 00:43:26.204 "config": [ 00:43:26.204 { 00:43:26.204 "method": "iobuf_set_options", 00:43:26.204 "params": { 00:43:26.204 "small_pool_count": 8192, 00:43:26.204 "large_pool_count": 1024, 00:43:26.204 "small_bufsize": 8192, 00:43:26.204 "large_bufsize": 135168, 00:43:26.204 "enable_numa": false 00:43:26.204 } 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "sock", 00:43:26.204 "config": [ 00:43:26.204 { 00:43:26.204 "method": "sock_set_default_impl", 00:43:26.204 "params": { 00:43:26.204 "impl_name": "posix" 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "sock_impl_set_options", 00:43:26.204 "params": { 00:43:26.204 
"impl_name": "ssl", 00:43:26.204 "recv_buf_size": 4096, 00:43:26.204 "send_buf_size": 4096, 00:43:26.204 "enable_recv_pipe": true, 00:43:26.204 "enable_quickack": false, 00:43:26.204 "enable_placement_id": 0, 00:43:26.204 "enable_zerocopy_send_server": true, 00:43:26.204 "enable_zerocopy_send_client": false, 00:43:26.204 "zerocopy_threshold": 0, 00:43:26.204 "tls_version": 0, 00:43:26.204 "enable_ktls": false 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "sock_impl_set_options", 00:43:26.204 "params": { 00:43:26.204 "impl_name": "posix", 00:43:26.204 "recv_buf_size": 2097152, 00:43:26.204 "send_buf_size": 2097152, 00:43:26.204 "enable_recv_pipe": true, 00:43:26.204 "enable_quickack": false, 00:43:26.204 "enable_placement_id": 0, 00:43:26.204 "enable_zerocopy_send_server": true, 00:43:26.204 "enable_zerocopy_send_client": false, 00:43:26.204 "zerocopy_threshold": 0, 00:43:26.204 "tls_version": 0, 00:43:26.204 "enable_ktls": false 00:43:26.204 } 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "vmd", 00:43:26.204 "config": [] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "accel", 00:43:26.204 "config": [ 00:43:26.204 { 00:43:26.204 "method": "accel_set_options", 00:43:26.204 "params": { 00:43:26.204 "small_cache_size": 128, 00:43:26.204 "large_cache_size": 16, 00:43:26.204 "task_count": 2048, 00:43:26.204 "sequence_count": 2048, 00:43:26.204 "buf_count": 2048 00:43:26.204 } 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "bdev", 00:43:26.204 "config": [ 00:43:26.204 { 00:43:26.204 "method": "bdev_set_options", 00:43:26.204 "params": { 00:43:26.204 "bdev_io_pool_size": 65535, 00:43:26.204 "bdev_io_cache_size": 256, 00:43:26.204 "bdev_auto_examine": true, 00:43:26.204 "iobuf_small_cache_size": 128, 00:43:26.204 "iobuf_large_cache_size": 16 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_raid_set_options", 00:43:26.204 "params": { 
00:43:26.204 "process_window_size_kb": 1024, 00:43:26.204 "process_max_bandwidth_mb_sec": 0 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_iscsi_set_options", 00:43:26.204 "params": { 00:43:26.204 "timeout_sec": 30 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_nvme_set_options", 00:43:26.204 "params": { 00:43:26.204 "action_on_timeout": "none", 00:43:26.204 "timeout_us": 0, 00:43:26.204 "timeout_admin_us": 0, 00:43:26.204 "keep_alive_timeout_ms": 10000, 00:43:26.204 "arbitration_burst": 0, 00:43:26.204 "low_priority_weight": 0, 00:43:26.204 "medium_priority_weight": 0, 00:43:26.204 "high_priority_weight": 0, 00:43:26.204 "nvme_adminq_poll_period_us": 10000, 00:43:26.204 "nvme_ioq_poll_period_us": 0, 00:43:26.204 "io_queue_requests": 512, 00:43:26.204 "delay_cmd_submit": true, 00:43:26.204 "transport_retry_count": 4, 00:43:26.204 "bdev_retry_count": 3, 00:43:26.204 "transport_ack_timeout": 0, 00:43:26.204 "ctrlr_loss_timeout_sec": 0, 00:43:26.204 "reconnect_delay_sec": 0, 00:43:26.204 "fast_io_fail_timeout_sec": 0, 00:43:26.204 "disable_auto_failback": false, 00:43:26.204 "generate_uuids": false, 00:43:26.204 "transport_tos": 0, 00:43:26.204 "nvme_error_stat": false, 00:43:26.204 "rdma_srq_size": 0, 00:43:26.204 "io_path_stat": false, 00:43:26.204 "allow_accel_sequence": false, 00:43:26.204 "rdma_max_cq_size": 0, 00:43:26.204 "rdma_cm_event_timeout_ms": 0, 00:43:26.204 "dhchap_digests": [ 00:43:26.204 "sha256", 00:43:26.204 "sha384", 00:43:26.204 "sha512" 00:43:26.204 ], 00:43:26.204 "dhchap_dhgroups": [ 00:43:26.204 "null", 00:43:26.204 "ffdhe2048", 00:43:26.204 "ffdhe3072", 00:43:26.204 "ffdhe4096", 00:43:26.204 "ffdhe6144", 00:43:26.204 "ffdhe8192" 00:43:26.204 ] 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_nvme_attach_controller", 00:43:26.204 "params": { 00:43:26.204 "name": "nvme0", 00:43:26.204 "trtype": "TCP", 00:43:26.204 "adrfam": "IPv4", 00:43:26.204 "traddr": 
"127.0.0.1", 00:43:26.204 "trsvcid": "4420", 00:43:26.204 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:26.204 "prchk_reftag": false, 00:43:26.204 "prchk_guard": false, 00:43:26.204 "ctrlr_loss_timeout_sec": 0, 00:43:26.204 "reconnect_delay_sec": 0, 00:43:26.204 "fast_io_fail_timeout_sec": 0, 00:43:26.204 "psk": "key0", 00:43:26.204 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:26.204 "hdgst": false, 00:43:26.204 "ddgst": false, 00:43:26.204 "multipath": "multipath" 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_nvme_set_hotplug", 00:43:26.204 "params": { 00:43:26.204 "period_us": 100000, 00:43:26.204 "enable": false 00:43:26.204 } 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "method": "bdev_wait_for_examine" 00:43:26.204 } 00:43:26.204 ] 00:43:26.204 }, 00:43:26.204 { 00:43:26.204 "subsystem": "nbd", 00:43:26.204 "config": [] 00:43:26.204 } 00:43:26.204 ] 00:43:26.205 }' 00:43:26.205 16:49:16 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:26.205 [2024-11-19 16:49:16.487041] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:43:26.205 [2024-11-19 16:49:16.487165] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482218 ] 00:43:26.463 [2024-11-19 16:49:16.552450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:26.463 [2024-11-19 16:49:16.604398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:26.463 [2024-11-19 16:49:16.781619] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:26.722 16:49:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:26.722 16:49:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:43:26.722 16:49:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:43:26.722 16:49:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.722 16:49:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:43:26.980 16:49:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:43:26.980 16:49:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:43:26.980 16:49:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:43:26.980 16:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:26.980 16:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:26.980 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:26.980 16:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:43:27.238 16:49:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:43:27.238 16:49:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:43:27.238 16:49:17 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:43:27.238 16:49:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:43:27.238 16:49:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:27.238 16:49:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:43:27.238 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:27.496 16:49:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:43:27.496 16:49:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:43:27.496 16:49:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:43:27.496 16:49:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:43:27.777 16:49:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:43:27.777 16:49:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:43:27.777 16:49:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.2GLOe2Jw1F /tmp/tmp.U6qykMjaUJ 00:43:27.777 16:49:18 keyring_file -- keyring/file.sh@20 -- # killprocess 482218 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 482218 ']' 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 482218 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482218 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 482218' 00:43:27.777 killing process with pid 482218 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@973 -- # kill 482218 00:43:27.777 Received shutdown signal, test time was about 1.000000 seconds 00:43:27.777 00:43:27.777 Latency(us) 00:43:27.777 [2024-11-19T15:49:18.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:27.777 [2024-11-19T15:49:18.116Z] =================================================================================================================== 00:43:27.777 [2024-11-19T15:49:18.116Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:43:27.777 16:49:18 keyring_file -- common/autotest_common.sh@978 -- # wait 482218 00:43:28.058 16:49:18 keyring_file -- keyring/file.sh@21 -- # killprocess 480741 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 480741 ']' 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@958 -- # kill -0 480741 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # uname 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 480741 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 480741' 00:43:28.058 killing process with pid 480741 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@973 -- # kill 480741 00:43:28.058 16:49:18 keyring_file -- common/autotest_common.sh@978 -- # wait 480741 00:43:28.342 00:43:28.342 real 0m14.500s 00:43:28.342 user 0m37.069s 00:43:28.342 sys 0m3.188s 00:43:28.342 16:49:18 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:28.342 16:49:18 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:43:28.342 ************************************ 00:43:28.342 END TEST keyring_file 00:43:28.342 ************************************ 00:43:28.606 16:49:18 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:43:28.606 16:49:18 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:28.606 16:49:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:43:28.606 16:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:28.606 16:49:18 -- common/autotest_common.sh@10 -- # set +x 00:43:28.606 ************************************ 00:43:28.606 START TEST keyring_linux 00:43:28.606 ************************************ 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:43:28.607 Joined session keyring: 59577042 00:43:28.607 * Looking for test storage... 
00:43:28.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:43:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.607 --rc genhtml_branch_coverage=1 00:43:28.607 --rc genhtml_function_coverage=1 00:43:28.607 --rc genhtml_legend=1 00:43:28.607 --rc geninfo_all_blocks=1 00:43:28.607 --rc geninfo_unexecuted_blocks=1 00:43:28.607 00:43:28.607 ' 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:43:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.607 --rc genhtml_branch_coverage=1 00:43:28.607 --rc genhtml_function_coverage=1 00:43:28.607 --rc genhtml_legend=1 00:43:28.607 --rc geninfo_all_blocks=1 00:43:28.607 --rc geninfo_unexecuted_blocks=1 00:43:28.607 00:43:28.607 ' 
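The `cmp_versions`/`lt 1.15 2` trace above splits each dotted version on `.-:`, pads the shorter list, and compares components numerically left to right. A compact Python sketch of that comparison (numeric dotted components only; the shell helper's extra separator handling is omitted):

```python
def version_lt(v1: str, v2: str) -> bool:
    """Dotted-version 'less than': compare numeric components
    left to right, treating missing components as 0 -- a sketch
    of what scripts/common.sh cmp_versions does for 'lt'."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    n = max(len(a), len(b))
    a += [0] * (n - len(a))  # pad shorter version with zeros
    b += [0] * (n - len(b))
    return a < b  # Python compares lists element-wise

print(version_lt("1.15", "2"))  # True, so the lcov >= 2 branch is taken
print(version_lt("2.0", "2"))   # False: 2.0 and 2 compare equal
```

This is the check that decides which `LCOV_OPTS` branch-coverage flags get exported in the log above.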
00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:43:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.607 --rc genhtml_branch_coverage=1 00:43:28.607 --rc genhtml_function_coverage=1 00:43:28.607 --rc genhtml_legend=1 00:43:28.607 --rc geninfo_all_blocks=1 00:43:28.607 --rc geninfo_unexecuted_blocks=1 00:43:28.607 00:43:28.607 ' 00:43:28.607 16:49:18 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:43:28.607 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:28.607 --rc genhtml_branch_coverage=1 00:43:28.607 --rc genhtml_function_coverage=1 00:43:28.607 --rc genhtml_legend=1 00:43:28.607 --rc geninfo_all_blocks=1 00:43:28.607 --rc geninfo_unexecuted_blocks=1 00:43:28.607 00:43:28.607 ' 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:28.607 16:49:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:28.607 16:49:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.607 16:49:18 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.607 16:49:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.607 16:49:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:43:28.607 16:49:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:43:28.607 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:43:28.607 16:49:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:28.607 16:49:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:28.607 16:49:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:43:28.608 /tmp/:spdk-test:key0 00:43:28.608 16:49:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:43:28.608 16:49:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:43:28.608 16:49:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:43:28.866 16:49:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:43:28.866 16:49:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:43:28.866 /tmp/:spdk-test:key1 00:43:28.866 16:49:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=482702 00:43:28.866 16:49:18 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:43:28.866 16:49:18 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 482702 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 482702 ']' 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:28.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:28.866 16:49:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:28.866 [2024-11-19 16:49:19.010648] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
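The `format_interchange_psk`/`format_key` steps above pipe each raw key through an inline `python -` snippet to produce the `NVMeTLSkey-1:00:...` strings logged for `/tmp/:spdk-test:key0` and `/tmp/:spdk-test:key1`. A minimal sketch of what that encoding appears to be — base64 of the configured key bytes plus a CRC-32 trailer; the trailer and its byte order are assumptions here, not taken from SPDK's actual helper in `nvmf/common.sh`:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_hex: str, hash_id: int = 0) -> str:
    # Treat the configured key as ASCII bytes (the log's base64 payload
    # decodes back to the literal string "00112233445566778899aabbccddeeff").
    key = key_hex.encode("ascii")
    # Assumed: a little-endian CRC-32 of the key bytes is appended before
    # base64-encoding, giving the 36-byte payload seen in the log.
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    return f"NVMeTLSkey-1:{hash_id:02d}:{base64.b64encode(key + crc).decode()}:"

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
# The log records NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
# for this key; whether the sketch matches byte-for-byte depends on the
# assumed CRC byte order, but the prefix and payload structure are fixed.
```

The resulting string is what the test later loads into the session keyring with `keyctl add user :spdk-test:key0 ... @s`.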
00:43:28.866 [2024-11-19 16:49:19.010743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482702 ] 00:43:28.866 [2024-11-19 16:49:19.076592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:28.866 [2024-11-19 16:49:19.121853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:29.125 [2024-11-19 16:49:19.380852] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:29.125 null0 00:43:29.125 [2024-11-19 16:49:19.412915] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:43:29.125 [2024-11-19 16:49:19.413429] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:43:29.125 21198486 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:43:29.125 827917754 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=482717 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w 
randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:43:29.125 16:49:19 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 482717 /var/tmp/bperf.sock 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 482717 ']' 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:43:29.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:29.125 16:49:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:29.382 [2024-11-19 16:49:19.478393] Starting SPDK v25.01-pre git sha1 dcc2ca8f3 / DPDK 22.11.4 initialization... 
00:43:29.382 [2024-11-19 16:49:19.478478] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482717 ] 00:43:29.382 [2024-11-19 16:49:19.545192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:43:29.382 [2024-11-19 16:49:19.591251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:43:29.382 16:49:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:29.382 16:49:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:43:29.382 16:49:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:43:29.382 16:49:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:43:29.642 16:49:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:43:29.642 16:49:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:43:30.209 16:49:20 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:30.209 16:49:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:43:30.467 [2024-11-19 16:49:20.612189] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:43:30.467 nvme0n1 00:43:30.467 16:49:20 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:43:30.467 16:49:20 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:43:30.467 16:49:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:30.467 16:49:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:30.467 16:49:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:30.467 16:49:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.725 16:49:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:43:30.725 16:49:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:30.725 16:49:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:43:30.725 16:49:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:43:30.725 16:49:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:43:30.725 16:49:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:43:30.725 16:49:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@25 -- # sn=21198486 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@26 -- # [[ 21198486 == \2\1\1\9\8\4\8\6 ]] 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 21198486 00:43:30.984 16:49:21 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:43:30.984 16:49:21 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:43:31.274 Running I/O for 1 seconds... 00:43:32.209 11414.00 IOPS, 44.59 MiB/s 00:43:32.209 Latency(us) 00:43:32.209 [2024-11-19T15:49:22.548Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.209 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:43:32.209 nvme0n1 : 1.01 11418.14 44.60 0.00 0.00 11143.25 3203.98 14854.83 00:43:32.209 [2024-11-19T15:49:22.548Z] =================================================================================================================== 00:43:32.209 [2024-11-19T15:49:22.548Z] Total : 11418.14 44.60 0.00 0.00 11143.25 3203.98 14854.83 00:43:32.209 { 00:43:32.209 "results": [ 00:43:32.209 { 00:43:32.209 "job": "nvme0n1", 00:43:32.210 "core_mask": "0x2", 00:43:32.210 "workload": "randread", 00:43:32.210 "status": "finished", 00:43:32.210 "queue_depth": 128, 00:43:32.210 "io_size": 4096, 00:43:32.210 "runtime": 1.010935, 00:43:32.210 "iops": 11418.142610553596, 00:43:32.210 "mibps": 44.60211957247498, 00:43:32.210 "io_failed": 0, 00:43:32.210 "io_timeout": 0, 00:43:32.210 "avg_latency_us": 11143.252236757246, 00:43:32.210 "min_latency_us": 3203.9822222222224, 00:43:32.210 "max_latency_us": 14854.826666666666 00:43:32.210 } 00:43:32.210 ], 00:43:32.210 "core_count": 1 00:43:32.210 } 00:43:32.210 16:49:22 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:43:32.210 16:49:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:43:32.469 16:49:22 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:43:32.469 16:49:22 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:43:32.469 16:49:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:43:32.469 16:49:22 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:43:32.469 16:49:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:43:32.469 16:49:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:43:32.727 16:49:22 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:43:32.727 16:49:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:43:32.727 16:49:22 keyring_linux -- keyring/linux.sh@23 -- # return 00:43:32.727 16:49:22 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:43:32.727 16:49:22 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.727 16:49:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:43:32.985 [2024-11-19 16:49:23.198581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:43:32.985 [2024-11-19 16:49:23.199234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9900 (107): Transport endpoint is not connected 00:43:32.985 [2024-11-19 16:49:23.200223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de9900 (9): Bad file descriptor 00:43:32.985 [2024-11-19 16:49:23.201222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:43:32.985 [2024-11-19 16:49:23.201248] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:43:32.985 [2024-11-19 16:49:23.201263] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:43:32.985 [2024-11-19 16:49:23.201277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:43:32.985 request: 00:43:32.985 { 00:43:32.985 "name": "nvme0", 00:43:32.985 "trtype": "tcp", 00:43:32.985 "traddr": "127.0.0.1", 00:43:32.985 "adrfam": "ipv4", 00:43:32.985 "trsvcid": "4420", 00:43:32.985 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:32.985 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:32.985 "prchk_reftag": false, 00:43:32.985 "prchk_guard": false, 00:43:32.985 "hdgst": false, 00:43:32.985 "ddgst": false, 00:43:32.985 "psk": ":spdk-test:key1", 00:43:32.985 "allow_unrecognized_csi": false, 00:43:32.985 "method": "bdev_nvme_attach_controller", 00:43:32.985 "req_id": 1 00:43:32.985 } 00:43:32.985 Got JSON-RPC error response 00:43:32.985 response: 00:43:32.985 { 00:43:32.985 "code": -5, 00:43:32.985 "message": "Input/output error" 00:43:32.985 } 00:43:32.985 16:49:23 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:43:32.985 16:49:23 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:43:32.985 16:49:23 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:43:32.985 16:49:23 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@33 -- # sn=21198486 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 21198486 00:43:32.985 1 links removed 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:43:32.985 
16:49:23 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@33 -- # sn=827917754 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 827917754 00:43:32.985 1 links removed 00:43:32.985 16:49:23 keyring_linux -- keyring/linux.sh@41 -- # killprocess 482717 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 482717 ']' 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 482717 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482717 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482717' 00:43:32.986 killing process with pid 482717 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 482717 00:43:32.986 Received shutdown signal, test time was about 1.000000 seconds 00:43:32.986 00:43:32.986 Latency(us) 00:43:32.986 [2024-11-19T15:49:23.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:43:32.986 [2024-11-19T15:49:23.325Z] =================================================================================================================== 00:43:32.986 [2024-11-19T15:49:23.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:43:32.986 16:49:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 482717 
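The `check_keys` calls above pair `keyring_get_keys` over the bperf socket with `jq length`, then compare a named key's reported `sn` against what `keyctl search @s user <name>` returns, before cleanup unlinks both serials. A hypothetical Python rendering of that check, using only the `name`/`sn` fields visible in the RPC output (the function shape is illustrative, not SPDK's code):

```python
import json

def check_keys(get_keys_json, count, name=None, keyctl_sn=None):
    # Mirror of the shell helper: first compare the key-list length with
    # the expected count, then (for a named key) match its serial number
    # against the one keyctl resolved from the session keyring.
    keys = json.loads(get_keys_json)
    if len(keys) != count:
        return False
    if count == 0 or name is None:
        return True
    sn = next((k["sn"] for k in keys if k["name"] == name), None)
    return sn == keyctl_sn

# Shapes taken from the log: one key (sn 21198486) before detach, zero after.
before_detach = json.dumps([{"name": ":spdk-test:key0", "sn": 21198486}])
```

With the log's values, `check_keys(before_detach, 1, ":spdk-test:key0", 21198486)` succeeds, and after `bdev_nvme_detach_controller` the same check with `count=0` against an empty list succeeds.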
00:43:33.245 16:49:23 keyring_linux -- keyring/linux.sh@42 -- # killprocess 482702 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 482702 ']' 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 482702 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482702 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482702' 00:43:33.245 killing process with pid 482702 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@973 -- # kill 482702 00:43:33.245 16:49:23 keyring_linux -- common/autotest_common.sh@978 -- # wait 482702 00:43:33.504 00:43:33.504 real 0m5.106s 00:43:33.504 user 0m10.227s 00:43:33.504 sys 0m1.612s 00:43:33.504 16:49:23 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:33.504 16:49:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:43:33.504 ************************************ 00:43:33.504 END TEST keyring_linux 00:43:33.504 ************************************ 00:43:33.504 16:49:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:43:33.504 16:49:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:43:33.504 16:49:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:43:33.763 16:49:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:43:33.763 16:49:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:43:33.763 16:49:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:43:33.763 16:49:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:43:33.763 16:49:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:33.763 16:49:23 -- common/autotest_common.sh@10 -- # set +x 00:43:33.763 16:49:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:43:33.763 16:49:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:43:33.763 16:49:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:43:33.763 16:49:23 -- common/autotest_common.sh@10 -- # set +x 00:43:35.669 INFO: APP EXITING 00:43:35.669 INFO: killing all VMs 00:43:35.669 INFO: killing vhost app 00:43:35.669 INFO: EXIT DONE 00:43:36.607 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:43:36.607 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:43:36.607 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:43:36.607 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:43:36.607 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:43:36.607 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:43:36.607 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:43:36.607 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:43:36.607 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:43:36.607 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:43:36.607 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:43:36.607 
0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:43:36.607 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:43:36.607 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:43:36.865 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:43:36.866 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:43:36.866 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:43:38.246 Cleaning 00:43:38.246 Removing: /var/run/dpdk/spdk0/config 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:43:38.246 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:43:38.246 Removing: /var/run/dpdk/spdk0/hugepage_info 00:43:38.246 Removing: /var/run/dpdk/spdk1/config 00:43:38.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:43:38.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:43:38.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:43:38.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:43:38.246 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:43:38.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:43:38.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:43:38.247 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:43:38.247 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:43:38.247 Removing: /var/run/dpdk/spdk1/hugepage_info 00:43:38.247 Removing: /var/run/dpdk/spdk2/config 00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:43:38.247 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:43:38.247 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:43:38.247 Removing: /var/run/dpdk/spdk2/hugepage_info
00:43:38.247 Removing: /var/run/dpdk/spdk3/config
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:43:38.247 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:43:38.247 Removing: /var/run/dpdk/spdk3/hugepage_info
00:43:38.247 Removing: /var/run/dpdk/spdk4/config
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:43:38.247 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:43:38.247 Removing: /var/run/dpdk/spdk4/hugepage_info
00:43:38.247 Removing: /dev/shm/bdev_svc_trace.1
00:43:38.247 Removing: /dev/shm/nvmf_trace.0
00:43:38.247 Removing: /dev/shm/spdk_tgt_trace.pid99288
00:43:38.247 Removing: /var/run/dpdk/spdk0
00:43:38.247 Removing: /var/run/dpdk/spdk1
00:43:38.247 Removing: /var/run/dpdk/spdk2
00:43:38.247 Removing: /var/run/dpdk/spdk3
00:43:38.247 Removing: /var/run/dpdk/spdk4
00:43:38.247 Removing: /var/run/dpdk/spdk_pid100386
00:43:38.247 Removing: /var/run/dpdk/spdk_pid100528
00:43:38.247 Removing: /var/run/dpdk/spdk_pid101237
00:43:38.247 Removing: /var/run/dpdk/spdk_pid101256
00:43:38.247 Removing: /var/run/dpdk/spdk_pid101514
00:43:38.247 Removing: /var/run/dpdk/spdk_pid102822
00:43:38.247 Removing: /var/run/dpdk/spdk_pid103766
00:43:38.247 Removing: /var/run/dpdk/spdk_pid103966
00:43:38.247 Removing: /var/run/dpdk/spdk_pid104251
00:43:38.247 Removing: /var/run/dpdk/spdk_pid104491
00:43:38.247 Removing: /var/run/dpdk/spdk_pid104690
00:43:38.247 Removing: /var/run/dpdk/spdk_pid104851
00:43:38.247 Removing: /var/run/dpdk/spdk_pid105003
00:43:38.247 Removing: /var/run/dpdk/spdk_pid105193
00:43:38.247 Removing: /var/run/dpdk/spdk_pid105505
00:43:38.247 Removing: /var/run/dpdk/spdk_pid107929
00:43:38.247 Removing: /var/run/dpdk/spdk_pid108152
00:43:38.247 Removing: /var/run/dpdk/spdk_pid108324
00:43:38.247 Removing: /var/run/dpdk/spdk_pid108333
00:43:38.247 Removing: /var/run/dpdk/spdk_pid108638
00:43:38.247 Removing: /var/run/dpdk/spdk_pid108763
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109070
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109075
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109363
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109373
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109537
00:43:38.247 Removing: /var/run/dpdk/spdk_pid109667
00:43:38.247 Removing: /var/run/dpdk/spdk_pid110050
00:43:38.247 Removing: /var/run/dpdk/spdk_pid110203
00:43:38.247 Removing: /var/run/dpdk/spdk_pid110404
00:43:38.247 Removing: /var/run/dpdk/spdk_pid112642
00:43:38.247 Removing: /var/run/dpdk/spdk_pid115169
00:43:38.247 Removing: /var/run/dpdk/spdk_pid122152
00:43:38.247 Removing: /var/run/dpdk/spdk_pid122677
00:43:38.247 Removing: /var/run/dpdk/spdk_pid125086
00:43:38.247 Removing: /var/run/dpdk/spdk_pid125359
00:43:38.247 Removing: /var/run/dpdk/spdk_pid128114
00:43:38.247 Removing: /var/run/dpdk/spdk_pid132343
00:43:38.247 Removing: /var/run/dpdk/spdk_pid134529
00:43:38.247 Removing: /var/run/dpdk/spdk_pid140827
00:43:38.247 Removing: /var/run/dpdk/spdk_pid146200
00:43:38.247 Removing: /var/run/dpdk/spdk_pid147431
00:43:38.247 Removing: /var/run/dpdk/spdk_pid148104
00:43:38.247 Removing: /var/run/dpdk/spdk_pid158489
00:43:38.247 Removing: /var/run/dpdk/spdk_pid160776
00:43:38.247 Removing: /var/run/dpdk/spdk_pid216704
00:43:38.247 Removing: /var/run/dpdk/spdk_pid220030
00:43:38.247 Removing: /var/run/dpdk/spdk_pid223969
00:43:38.247 Removing: /var/run/dpdk/spdk_pid228741
00:43:38.247 Removing: /var/run/dpdk/spdk_pid228743
00:43:38.247 Removing: /var/run/dpdk/spdk_pid229400
00:43:38.247 Removing: /var/run/dpdk/spdk_pid229974
00:43:38.247 Removing: /var/run/dpdk/spdk_pid230593
00:43:38.247 Removing: /var/run/dpdk/spdk_pid230990
00:43:38.247 Removing: /var/run/dpdk/spdk_pid230994
00:43:38.247 Removing: /var/run/dpdk/spdk_pid231254
00:43:38.247 Removing: /var/run/dpdk/spdk_pid231393
00:43:38.247 Removing: /var/run/dpdk/spdk_pid231397
00:43:38.247 Removing: /var/run/dpdk/spdk_pid232051
00:43:38.247 Removing: /var/run/dpdk/spdk_pid232590
00:43:38.247 Removing: /var/run/dpdk/spdk_pid233244
00:43:38.247 Removing: /var/run/dpdk/spdk_pid233639
00:43:38.247 Removing: /var/run/dpdk/spdk_pid233651
00:43:38.247 Removing: /var/run/dpdk/spdk_pid233907
00:43:38.247 Removing: /var/run/dpdk/spdk_pid234799
00:43:38.247 Removing: /var/run/dpdk/spdk_pid235528
00:43:38.247 Removing: /var/run/dpdk/spdk_pid240856
00:43:38.247 Removing: /var/run/dpdk/spdk_pid269077
00:43:38.247 Removing: /var/run/dpdk/spdk_pid271995
00:43:38.247 Removing: /var/run/dpdk/spdk_pid273168
00:43:38.247 Removing: /var/run/dpdk/spdk_pid274599
00:43:38.247 Removing: /var/run/dpdk/spdk_pid274739
00:43:38.247 Removing: /var/run/dpdk/spdk_pid275034
00:43:38.247 Removing: /var/run/dpdk/spdk_pid275479
00:43:38.247 Removing: /var/run/dpdk/spdk_pid275965
00:43:38.247 Removing: /var/run/dpdk/spdk_pid277281
00:43:38.247 Removing: /var/run/dpdk/spdk_pid278016
00:43:38.247 Removing: /var/run/dpdk/spdk_pid278450
00:43:38.247 Removing: /var/run/dpdk/spdk_pid279986
00:43:38.247 Removing: /var/run/dpdk/spdk_pid280361
00:43:38.247 Removing: /var/run/dpdk/spdk_pid280917
00:43:38.507 Removing: /var/run/dpdk/spdk_pid283300
00:43:38.507 Removing: /var/run/dpdk/spdk_pid286603
00:43:38.507 Removing: /var/run/dpdk/spdk_pid286604
00:43:38.507 Removing: /var/run/dpdk/spdk_pid286605
00:43:38.507 Removing: /var/run/dpdk/spdk_pid288817
00:43:38.507 Removing: /var/run/dpdk/spdk_pid291016
00:43:38.507 Removing: /var/run/dpdk/spdk_pid294431
00:43:38.507 Removing: /var/run/dpdk/spdk_pid317620
00:43:38.507 Removing: /var/run/dpdk/spdk_pid320387
00:43:38.507 Removing: /var/run/dpdk/spdk_pid324169
00:43:38.507 Removing: /var/run/dpdk/spdk_pid325121
00:43:38.507 Removing: /var/run/dpdk/spdk_pid326216
00:43:38.507 Removing: /var/run/dpdk/spdk_pid327291
00:43:38.507 Removing: /var/run/dpdk/spdk_pid330050
00:43:38.507 Removing: /var/run/dpdk/spdk_pid332516
00:43:38.507 Removing: /var/run/dpdk/spdk_pid334889
00:43:38.507 Removing: /var/run/dpdk/spdk_pid339726
00:43:38.507 Removing: /var/run/dpdk/spdk_pid339730
00:43:38.507 Removing: /var/run/dpdk/spdk_pid342625
00:43:38.507 Removing: /var/run/dpdk/spdk_pid342761
00:43:38.507 Removing: /var/run/dpdk/spdk_pid342902
00:43:38.507 Removing: /var/run/dpdk/spdk_pid343269
00:43:38.507 Removing: /var/run/dpdk/spdk_pid343288
00:43:38.507 Removing: /var/run/dpdk/spdk_pid344367
00:43:38.507 Removing: /var/run/dpdk/spdk_pid345544
00:43:38.507 Removing: /var/run/dpdk/spdk_pid346719
00:43:38.507 Removing: /var/run/dpdk/spdk_pid347893
00:43:38.507 Removing: /var/run/dpdk/spdk_pid349076
00:43:38.507 Removing: /var/run/dpdk/spdk_pid350366
00:43:38.507 Removing: /var/run/dpdk/spdk_pid354181
00:43:38.507 Removing: /var/run/dpdk/spdk_pid354525
00:43:38.507 Removing: /var/run/dpdk/spdk_pid355823
00:43:38.507 Removing: /var/run/dpdk/spdk_pid356618
00:43:38.507 Removing: /var/run/dpdk/spdk_pid360275
00:43:38.507 Removing: /var/run/dpdk/spdk_pid362244
00:43:38.507 Removing: /var/run/dpdk/spdk_pid365769
00:43:38.507 Removing: /var/run/dpdk/spdk_pid369596
00:43:38.507 Removing: /var/run/dpdk/spdk_pid376076
00:43:38.507 Removing: /var/run/dpdk/spdk_pid380556
00:43:38.507 Removing: /var/run/dpdk/spdk_pid380558
00:43:38.507 Removing: /var/run/dpdk/spdk_pid393071
00:43:38.507 Removing: /var/run/dpdk/spdk_pid393596
00:43:38.507 Removing: /var/run/dpdk/spdk_pid394004
00:43:38.507 Removing: /var/run/dpdk/spdk_pid394410
00:43:38.507 Removing: /var/run/dpdk/spdk_pid394987
00:43:38.507 Removing: /var/run/dpdk/spdk_pid395397
00:43:38.507 Removing: /var/run/dpdk/spdk_pid395800
00:43:38.507 Removing: /var/run/dpdk/spdk_pid396212
00:43:38.507 Removing: /var/run/dpdk/spdk_pid398711
00:43:38.507 Removing: /var/run/dpdk/spdk_pid398965
00:43:38.507 Removing: /var/run/dpdk/spdk_pid403392
00:43:38.507 Removing: /var/run/dpdk/spdk_pid403449
00:43:38.507 Removing: /var/run/dpdk/spdk_pid406801
00:43:38.507 Removing: /var/run/dpdk/spdk_pid409388
00:43:38.507 Removing: /var/run/dpdk/spdk_pid416212
00:43:38.507 Removing: /var/run/dpdk/spdk_pid416614
00:43:38.507 Removing: /var/run/dpdk/spdk_pid419120
00:43:38.507 Removing: /var/run/dpdk/spdk_pid419394
00:43:38.507 Removing: /var/run/dpdk/spdk_pid421890
00:43:38.507 Removing: /var/run/dpdk/spdk_pid425573
00:43:38.507 Removing: /var/run/dpdk/spdk_pid427739
00:43:38.507 Removing: /var/run/dpdk/spdk_pid433979
00:43:38.507 Removing: /var/run/dpdk/spdk_pid439315
00:43:38.507 Removing: /var/run/dpdk/spdk_pid440456
00:43:38.507 Removing: /var/run/dpdk/spdk_pid441096
00:43:38.507 Removing: /var/run/dpdk/spdk_pid451053
00:43:38.507 Removing: /var/run/dpdk/spdk_pid453294
00:43:38.507 Removing: /var/run/dpdk/spdk_pid455700
00:43:38.507 Removing: /var/run/dpdk/spdk_pid460623
00:43:38.507 Removing: /var/run/dpdk/spdk_pid460741
00:43:38.507 Removing: /var/run/dpdk/spdk_pid463643
00:43:38.507 Removing: /var/run/dpdk/spdk_pid464922
00:43:38.507 Removing: /var/run/dpdk/spdk_pid466329
00:43:38.507 Removing: /var/run/dpdk/spdk_pid467189
00:43:38.507 Removing: /var/run/dpdk/spdk_pid468744
00:43:38.507 Removing: /var/run/dpdk/spdk_pid469999
00:43:38.507 Removing: /var/run/dpdk/spdk_pid475291
00:43:38.507 Removing: /var/run/dpdk/spdk_pid475679
00:43:38.507 Removing: /var/run/dpdk/spdk_pid476067
00:43:38.507 Removing: /var/run/dpdk/spdk_pid477622
00:43:38.507 Removing: /var/run/dpdk/spdk_pid477924
00:43:38.507 Removing: /var/run/dpdk/spdk_pid478298
00:43:38.507 Removing: /var/run/dpdk/spdk_pid480741
00:43:38.507 Removing: /var/run/dpdk/spdk_pid480754
00:43:38.507 Removing: /var/run/dpdk/spdk_pid482218
00:43:38.507 Removing: /var/run/dpdk/spdk_pid482702
00:43:38.507 Removing: /var/run/dpdk/spdk_pid482717
00:43:38.507 Removing: /var/run/dpdk/spdk_pid97178
00:43:38.507 Removing: /var/run/dpdk/spdk_pid98422
00:43:38.507 Removing: /var/run/dpdk/spdk_pid99288
00:43:38.507 Removing: /var/run/dpdk/spdk_pid99695
00:43:38.507 Clean
00:43:38.766 16:49:28 -- common/autotest_common.sh@1453 -- # return 0
00:43:38.766 16:49:28 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:43:38.766 16:49:28 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:38.766 16:49:28 -- common/autotest_common.sh@10 -- # set +x
00:43:38.766 16:49:28 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:43:38.766 16:49:28 -- common/autotest_common.sh@732 -- # xtrace_disable
00:43:38.766 16:49:28 -- common/autotest_common.sh@10 -- # set +x
00:43:38.766 16:49:28 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:43:38.766 16:49:28 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:43:38.766 16:49:28 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:43:38.766 16:49:28 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:43:38.766 16:49:28 -- spdk/autotest.sh@398 -- # hostname
00:43:38.766 16:49:28 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:43:39.031 geninfo: WARNING: invalid characters removed from testname!
00:44:11.107 16:49:59 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:13.635 16:50:03 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:16.918 16:50:06 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:19.447 16:50:09 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:22.740 16:50:12 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:25.268 16:50:15 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:44:28.551 16:50:18 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:44:28.551 16:50:18 -- spdk/autorun.sh@1 -- $ timing_finish
00:44:28.551 16:50:18 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:44:28.551 16:50:18 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:44:28.551 16:50:18 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:44:28.551 16:50:18 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:28.551 + [[ -n 5610 ]]
00:44:28.551 + sudo kill 5610
00:44:28.563 [Pipeline] }
00:44:28.579 [Pipeline] // stage
00:44:28.584 [Pipeline] }
00:44:28.598 [Pipeline] // timeout
00:44:28.603 [Pipeline] }
00:44:28.617 [Pipeline] // catchError
00:44:28.623 [Pipeline] }
00:44:28.639 [Pipeline] // wrap
00:44:28.646 [Pipeline] }
00:44:28.659 [Pipeline] // catchError
00:44:28.669 [Pipeline] stage
00:44:28.671 [Pipeline] { (Epilogue)
00:44:28.685 [Pipeline] catchError
00:44:28.687 [Pipeline] {
00:44:28.700 [Pipeline] echo
00:44:28.701 Cleanup processes
00:44:28.707 [Pipeline] sh
00:44:28.995 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:28.995 494652 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:29.012 [Pipeline] sh
00:44:29.325 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:44:29.325 ++ awk '{print $1}'
00:44:29.325 ++ grep -v 'sudo pgrep'
00:44:29.325 + sudo kill -9
00:44:29.325 + true
00:44:29.338 [Pipeline] sh
00:44:29.622 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:44:41.839 [Pipeline] sh
00:44:42.132 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:44:42.132 Artifacts sizes are good
00:44:42.148 [Pipeline] archiveArtifacts
00:44:42.155 Archiving artifacts
00:44:42.626 [Pipeline] sh
00:44:42.983 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:44:42.999 [Pipeline] cleanWs
00:44:43.010 [WS-CLEANUP] Deleting project workspace...
00:44:43.010 [WS-CLEANUP] Deferred wipeout is used...
00:44:43.018 [WS-CLEANUP] done
00:44:43.020 [Pipeline] }
00:44:43.037 [Pipeline] // catchError
00:44:43.049 [Pipeline] sh
00:44:43.335 + logger -p user.info -t JENKINS-CI
00:44:43.343 [Pipeline] }
00:44:43.357 [Pipeline] // stage
00:44:43.361 [Pipeline] }
00:44:43.376 [Pipeline] // node
00:44:43.381 [Pipeline] End of Pipeline
00:44:43.424 Finished: SUCCESS